
WO2022088964A1 - Control method and apparatus for an electronic device - Google Patents

Control method and apparatus for an electronic device

Info

Publication number
WO2022088964A1
WO2022088964A1 (PCT/CN2021/116074)
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
user
voice
voice assistant
target
Prior art date
Application number
PCT/CN2021/116074
Other languages
English (en)
French (fr)
Inventor
吴金娴
潘邵武
许翔
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to EP21884695.4A (published as EP4221172A4)
Priority to US18/250,511 (published as US20230410806A1)
Publication of WO2022088964A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M 3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M 3/4936 Speech interaction details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/28 Constructional details of speech recognition systems
    • G10L 15/30 Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/42365 Presence services providing information on the willingness to communicate or the ability to communicate in terms of media capability or network connectivity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/50 Centralised arrangements for answering calls; Centralised arrangements for recording messages for absent or busy subscribers; Centralised arrangements for recording messages
    • H04M 3/527 Centralised call answering arrangements not requiring operator intervention
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification techniques
    • G10L 17/04 Training, enrolment or model building
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/226 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • G10L 2015/228 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of application context
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72409 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
    • H04M 1/72412 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories using two-way short-range wireless interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/72442 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for playing music files
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72451 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to schedules, e.g. using calendar applications
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72457 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to geographic location
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2203/00 Aspects of automatic or semi-automatic exchanges
    • H04M 2203/20 Aspects of automatic or semi-automatic exchanges related to features of supplementary services
    • H04M 2203/2094 Proximity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/12 Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2250/00 Details of telephonic subscriber devices
    • H04M 2250/74 Details of telephonic subscriber devices with voice recognition means

Definitions

  • the present application relates to the technical field of intelligent terminals, and in particular, to a control method and device for electronic equipment.
  • electronic devices can interact with users through intelligent dialogue and instant question-and-answer, which can help users solve problems and provide users with intelligent and convenient voice assistant services.
  • the voice assistant service of the current electronic device can only take into account the needs of the user alone, and cannot fully take into account the environment in which the user is located. Therefore, current voice assistant services are not smart enough to meet the needs of multiple people.
  • the present application relates to a control method and device for an electronic device to improve the performance of a voice assistant service.
  • an embodiment of the present application provides a control method for an electronic device.
  • the method may be executed by the electronic device provided in the embodiment of the present application, or executed by a chip with functions similar to those of the electronic device.
  • the electronic device may receive a voice instruction input by the user through the voice assistant on the electronic device.
  • the electronic device may determine the current user status of at least one user in the area to which it belongs.
  • the electronic device may respond to the input voice command according to the current user state of the at least one user.
  • the electronic device can determine the current user status of at least one user in the area to which it belongs when receiving the voice command, and can respond to the input voice command according to the acquired current user status; because more users can be taken into account, the voice assistant can serve users more intelligently and its performance is improved.
  • the electronic device when the electronic device determines the current user state of at least one user in the area to which it belongs, it may determine at least one target device in the area to which it belongs. The electronic device may send the first request message to the at least one target device. The first request message can be used to obtain the current user status. At least one target device can obtain the current user status within the range that can be monitored, and send it to the electronic device. The electronic device can receive at least one current user state from at least one target device.
  • the electronic device can determine at least one target device in the area to which it belongs, and acquire the current user status of at least one user through communication with the at least one target device.
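The request/response exchange described above can be sketched as follows. This is a minimal illustration: `TargetDevice`, `handle_request`, and the `GET_USER_STATE` message are names invented for the sketch, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class TargetDevice:
    """A device found in the same area as the device that heard the command."""
    device_id: str
    user_state: str  # e.g. "studying", "sleeping", "idle"

    def handle_request(self, request: str) -> str:
        # The first request message asks the device for its user's current state.
        if request == "GET_USER_STATE":
            return self.user_state
        raise ValueError(f"unknown request: {request}")

def collect_user_states(targets):
    """Send the first request message to every target device in the area and
    gather the current user state that each device reports back."""
    return [t.handle_request("GET_USER_STATE") for t in targets]
```

For example, `collect_user_states([TargetDevice("tablet-B", "studying"), TargetDevice("watch-C", "idle")])` yields the list of states the responding device can then act on.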
  • the electronic device may execute the operation corresponding to the voice instruction.
  • the first user state here represents the noise environment required by the user. If the first user state does not exist in the at least one current user state, the electronic device may search for at least one peripheral device in the current network connection. The electronic device may execute the operation corresponding to the voice instruction through at least one peripheral device.
  • the electronic device can choose different ways to execute the input voice command according to the noise environment required by the user, so that the voice assistant is more intelligent and takes into account the needs of more people.
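One way to read this selection logic is sketched below. Which states count as "requiring quiet" and the fallback of playing at lowered volume are assumptions made for illustration, not details stated in the source.

```python
# States assumed, for this sketch, to require a quiet environment.
QUIET_STATES = {"studying", "sleeping", "meeting"}

def choose_playback_route(current_states, peripheral_available):
    """Decide how to execute a 'play music' command given everyone's state:
    if some nearby user needs quiet, prefer a peripheral device found in the
    current network connection (e.g. connected headphones); failing that,
    fall back to playing through the speaker at a lowered volume."""
    if any(state in QUIET_STATES for state in current_states):
        return "peripheral" if peripheral_available else "speaker_low_volume"
    return "speaker"
```

The command itself is unchanged; only the execution path varies with the collected user states.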
  • At least one target device has a target user identity
  • the electronic device has a user identity.
  • the user ID here and the target user ID are in the same voice assistant group.
  • devices of different users can be added to the same voice assistant group through the user ID, so that the communication between users can be more convenient through the voice assistant group.
  • the electronic device may generate the first information in response to the voice instruction.
  • the voice commands here contain event information and time points. Therefore, the first information may also include event information and time points.
  • the electronic device may send the first information to at least one target device.
  • the electronic device can send the reminder message set for other users to at least one target device through the voice assistant group, so that the voice assistant is more intelligent.
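Building the first information from a voice instruction and fanning it out to target devices might look like the sketch below. The command grammar `remind <user> at <HH:MM> to <event>` is an assumption for illustration, not the patent's actual command format.

```python
import re
from dataclasses import dataclass

@dataclass
class FirstInformation:
    """The 'first information': event information plus a time point."""
    event: str
    time_point: str

def first_information_from_command(command):
    """Parse a reminder-style voice command into first information.
    Returns None when the command does not match the assumed grammar."""
    match = re.match(r"remind \w+ at (\d{1,2}:\d{2}) to (.+)", command)
    if match is None:
        return None
    return FirstInformation(event=match.group(2), time_point=match.group(1))

def send_to_targets(info, target_inboxes):
    """Deliver the first information to each target device's message inbox."""
    for inbox in target_inboxes:
        inbox.append(info)
```

A command such as `"remind Bob at 07:00 to join the meeting"` thus becomes a `FirstInformation` carrying both the event text and the time point.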
  • the present application provides a control method for a first electronic device.
  • the method may be performed by the electronic device provided in the present application, or by a chip with functions similar to those of the electronic device.
  • the electronic device may receive the first request message from the first electronic device.
  • the first request message may be used by the first electronic device to obtain the current user status.
  • the electronic device may acquire the current user state, and send the current user state to the first electronic device.
  • the electronic device can obtain the current user state according to the request message of the first electronic device and send it to the first electronic device, so that the first electronic device can execute the voice command input by the user according to the current user state; this enables the voice assistant service to take the needs of more people into account and improves its performance.
  • the electronic device may acquire the current user state through a sensor and/or from setting information collected from the user.
  • the electronic device can quickly and conveniently obtain the current user state according to the information set by the sensor or the user.
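The two sources named above can be merged as sketched here. Giving an explicit user-set status priority over a sensor-inferred one is an assumption of this sketch, not something the source specifies.

```python
def current_user_state(sensor_state=None, user_setting=None):
    """Combine the two sources the method names: a state the user set in the
    assistant (e.g. "do not disturb") and a state inferred from sensors
    (e.g. motion data suggesting "exercising"). The explicit setting is
    assumed to take priority over the inferred state."""
    if user_setting is not None:
        return user_setting
    if sensor_state is not None:
        return sensor_state
    return "unknown"
```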
  • At least one electronic device has a target user identity, and the first electronic device has a user identity.
  • the user ID here and the target user ID are in the same voice assistant group.
  • devices of different users can be added to the same voice assistant group through the user ID, so that the communication between users can be more convenient through the voice assistant group.
  • the electronic device may receive the first information.
  • the first information may include event information and a time point.
  • the electronic device may display event information according to the time point.
  • the electronic device can receive reminder messages set for itself from other users, and display the reminder messages at the reminder time point.
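Displaying event information when its time point arrives amounts to a small scheduling queue, sketched below. Representing time points as plain integers (e.g. minutes since midnight) is a simplification to keep the sketch self-contained.

```python
import heapq

class ReminderQueue:
    """Stores received reminder messages and surfaces those whose
    time point has arrived."""

    def __init__(self):
        self._pending = []  # min-heap of (time_point, event)

    def add(self, time_point, event):
        heapq.heappush(self._pending, (time_point, event))

    def due(self, now):
        """Pop and return every event whose time point is at or before now."""
        ready = []
        while self._pending and self._pending[0][0] <= now:
            ready.append(heapq.heappop(self._pending)[1])
        return ready
```

A device would poll `due()` (or set a timer for the earliest entry) and display whatever it returns.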
  • an embodiment of the present application provides a control method for an electronic device.
  • the method may be executed by the electronic device provided in the embodiment of the present application, or executed by a chip with functions similar to those of the electronic device.
  • the electronic device can receive the voice command input by the user through the voice assistant, and in response send the voice command to the second electronic device; the electronic device has a first user identification and the second electronic device has a second user identification; the first user identification and the second user identification are in the same voice assistant group.
  • the electronic device can generate reminder messages for other users in the group through the voice assistant group, and different users can communicate through the voice assistant group, making the voice assistant service more intelligent.
  • the electronic device may respond to the voice instruction by generating a corresponding first message; the first message may include event information and a time point; the electronic device may send the first message to the second electronic device, so that the second electronic device can display the event information according to the time point.
  • the electronic device can generate a corresponding reminder message according to the voice command input by the user, and send it to other users in the voice assistant group, so that other users can receive the reminder message.
  • the electronic device may send the voice instruction to the voice assistant corresponding to the second user identification through the voice assistant on the electronic device.
  • the electronic device can send voice commands to the voice assistants of other users in the voice assistant group through the voice assistant, and can safely and quickly set reminder messages for other users.
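The group-based forwarding described above can be sketched as a registry that maps user identifications to their assistants. The class and method names are illustrative, and assistants are modeled as plain callables for simplicity.

```python
class VoiceAssistantGroup:
    """Maps user identifications to voice assistants so that one member's
    assistant can forward a voice instruction to another member's assistant.
    Forwarding is only allowed between members of the same group."""

    def __init__(self):
        self._members = {}

    def join(self, user_id, assistant):
        # assistant: any callable that accepts a voice instruction
        self._members[user_id] = assistant

    def send(self, from_id, to_id, instruction):
        if from_id not in self._members or to_id not in self._members:
            raise KeyError("sender and receiver must be in the same voice assistant group")
        return self._members[to_id](instruction)
```

Restricting delivery to group members is what makes the channel "safe" in the sense the text suggests: an instruction never reaches an assistant outside the group.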
  • an embodiment of the present application provides a control method for an electronic device.
  • the method may be executed by the electronic device provided in the embodiment of the present application, or executed by a chip with functions similar to those of the electronic device.
  • the electronic device can receive a voice command from the first electronic device, and the electronic device can generate the first message according to the voice command.
  • the first message here may contain event information and a time point; the electronic device may display the event information according to the time point; or
  • An electronic device may receive the first message from the first electronic device.
  • the first message here may contain event information and a time point.
  • the electronic device may display the event information according to the time point; the first electronic device has a first user identification, and the electronic device has a second user identification; the first user identification and the second user identification may be in the same voice assistant group.
  • different users can set reminder messages for other users in the group through the voice assistant group. After receiving the reminder message, the receiving device can remind its user when the reminder time point is reached, which makes the voice assistant service smarter.
  • the electronic device may receive the first message from the voice assistant of the first electronic device through the voice assistant.
  • the electronic device can receive reminder messages set for itself by other users through the voice assistant, and can receive the reminder messages safely and quickly.
  • an embodiment of the present application provides a chip that is coupled to a memory in an electronic device and is used to call a computer program stored in the memory and execute the technical solution of the first aspect and any possible design thereof, of the second aspect and any possible implementation thereof, of the third aspect and any possible implementation thereof, or of the fourth aspect and any possible implementation thereof.
  • "Coupling" in the embodiments of the present application means that two components are directly or indirectly combined with each other.
  • an embodiment of the present application further provides a circuit system.
  • the circuitry may be one or more chips, such as a system-on-a-chip (SoC).
  • the circuit system includes at least one processing circuit, and the at least one processing circuit is configured to execute the technical solution of the first aspect and any possible implementation thereof, of the second aspect and any possible implementation thereof, of the third aspect and any possible implementation thereof, or of the fourth aspect and any possible implementation thereof.
  • an embodiment of the present application further provides an electronic device, where the electronic device includes a module/unit that executes the first aspect or any possible implementation thereof, the second aspect or any possible implementation thereof, the third aspect or any possible implementation thereof, or the fourth aspect or any possible implementation thereof.
  • These modules/units can be implemented by hardware, or by hardware executing corresponding software.
  • an embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium includes a computer program; when the computer program runs on an electronic device, the electronic device is made to execute the technical solution of any one of the first to fourth aspects and their possible implementations.
  • a ninth aspect of the embodiments of the present application provides a program product including instructions; when the program product runs on an electronic device, the electronic device is caused to execute the technical solution of the first aspect and any possible implementation thereof, of the second aspect and any possible implementation thereof, of the third aspect and any possible implementation thereof, or of the fourth aspect and any possible implementation thereof.
  • FIG. 1A is one of the schematic diagrams of a voice assistant of an electronic device provided by an embodiment of the present application.
  • FIG. 1B is one of the schematic diagrams of the voice assistant of the electronic device provided by the embodiment of the application.
  • FIG. 2 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • 4A is a schematic diagram of a display interface for setting a user state provided by an embodiment of the present application.
  • FIG. 4B is a schematic diagram of a display interface for sharing location information of users according to an embodiment of the present application.
  • FIG. 5 is one of the exemplary flowcharts of the control method of the electronic device provided by an embodiment of this application.
  • FIG. 6 is one of functional schematic diagrams of a voice assistant group provided by an embodiment of the present application.
  • FIG. 7 is one of functional schematic diagrams of a voice assistant group provided by an embodiment of the present application.
  • FIG. 8 is one of functional schematic diagrams of a voice assistant group provided by an embodiment of the present application.
  • FIG. 9A is one of the functional schematic diagrams of a voice assistant group provided by an embodiment of the present application.
  • FIG. 9B is one of the functional schematic diagrams of a voice assistant group provided by an embodiment of the present application.
  • FIG. 9C is one of the functional schematic diagrams of the voice assistant group provided by the embodiment of the present application.
  • FIG. 10 is one of the exemplary flowcharts of the control method of the electronic device provided by the embodiment of the present application.
  • FIG. 11A is one of the functional schematic diagrams of the voice assistant of the electronic device provided by an embodiment of the application.
  • FIG. 11B is one of the schematic diagrams of scenarios of a control method for an electronic device provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a method for determining a target device in the same area provided by an embodiment of the present application.
  • FIG. 13A is one of the schematic diagrams of a scenario of a control method for an electronic device provided by an embodiment of the present application.
  • FIG. 13B is one of the schematic diagrams of the scenarios of the control method of the electronic device provided by the embodiment of the present application.
  • FIG. 14A is one of the schematic diagrams of scenarios of a control method for an electronic device provided by an embodiment of the present application.
  • FIG. 14B is one of the schematic diagrams of scenarios of a control method for an electronic device provided by an embodiment of the present application.
  • FIG. 14C is one of the schematic diagrams of the scenarios of the control method of the electronic device provided by the embodiment of the present application.
  • FIG. 14D is one of the schematic diagrams of the scenarios of the control method of the electronic device provided by the embodiment of the present application.
  • FIG. 15 is one of the schematic diagrams of scenarios of a control method for an electronic device provided by an embodiment of the present application.
  • FIG. 16 is a block diagram of an electronic device provided by an embodiment of the present application.
  • a user makes a schedule for himself through a voice assistant service. For example, the user can say "there is a meeting at 7 am", and the electronic device can receive the user's voice data and perform text recognition. The electronic device can create an agenda based on the identified content, that is, "there is a meeting at 7 o'clock", so that the user can be reminded at 7 o'clock.
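The agenda-creation step above — recognize text, then extract the event and time — can be sketched as below. The sentence pattern is an assumption for illustration; a real assistant would use a full speech-recognition and language-understanding pipeline rather than a regular expression.

```python
import re

def create_agenda(recognized_text):
    """Extract an event and an hour from recognized speech such as
    'there is a meeting at 7 am'. Returns None when the assumed
    pattern does not match."""
    match = re.search(r"there is an? (.+?) at (\d{1,2})\s*(am|pm)", recognized_text)
    if match is None:
        return None
    event, hour, meridiem = match.group(1), int(match.group(2)), match.group(3)
    if meridiem == "pm" and hour != 12:
        hour += 12  # normalize to a 24-hour clock
    return {"event": event, "hour": hour}
```

The resulting agenda entry carries enough information for the device to trigger a reminder at the stated hour.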
  • the electronic device can recognize the user's voice and obtain relevant instructions, that is, instructions for playing music. At this time, the electronic device can open an application program capable of playing music, and play the music.
  • the voice assistant service of current electronic devices can only take into account the needs of one user, but cannot realize the interaction of multiple users.
  • the voice assistant service of current electronic devices cannot take into account the current user's environment. For example, user A wants to listen to music at home, while user B needs a quiet environment for studying at home. However, when the electronic device recognizes user A's voice "play music", it will not consider user B's needs, and will still open an application capable of playing music and play the music. In some embodiments, if the electronic device is connected with an external playback device, it can also play music through the external playback device. At this time, considering that user B needs a relatively quiet environment, user A can manually adjust the volume and lower the volume to avoid affecting user B.
  • an embodiment of the present application provides a control method for an electronic device, so as to avoid the above problems, so that the voice assistant service can meet the needs of multiple users, realize interaction between multiple users, and fully Consider the environment in which electronic devices are located, and provide users with more intelligent services.
  • the embodiment of the present application provides a control method for an electronic device, and the method can be applied to any electronic device, such as an electronic device with a curved screen, a full screen, a folding screen, and the like.
  • Electronic devices include, for example, mobile phones, tablets, wearable devices (e.g., watches, wristbands, smart helmets, etc.), in-vehicle devices, smart home devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), etc.
  • the electronic device can determine the current environment through a sensor when receiving a voice command input by the user, so that an appropriate manner can be selected to execute the user's voice command. Therefore, the voice assistant service can be made to consider the needs of multiple users, and can provide services to users more intelligently.
  • references in this specification to "one embodiment” or “some embodiments” and the like mean that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application.
  • appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," etc. in various places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments" unless specifically emphasized otherwise.
  • the terms "comprising", "including", "having" and their variants mean "including but not limited to" unless specifically emphasized otherwise.
  • in the embodiments of the present application, "at least one" means one or more, and "multiple" means two or more.
  • words such as "first" and "second" are used only to distinguish the description, and should not be understood as indicating or implying relative importance or order.
  • a mobile phone is used as an example for description.
  • various application programs (application, app) may be installed in the mobile phone, which may be referred to as applications, which are software programs capable of realizing one or more specific functions.
  • applications may be installed in an electronic device, for example, instant messaging applications, video applications, audio applications, image capturing applications, and the like.
  • instant messaging applications, for example, may include SMS applications, WeChat, WhatsApp Messenger, Line, Instagram, Kakao Talk, DingTalk, and the like.
  • Image capturing applications, for example, may include camera applications (system cameras or third-party camera applications).
  • Video applications, for example, may include YouTube, Twitter, Douyin, iQiyi, Tencent Video, and so on.
  • Audio applications, for example, may include Kugou Music, Xiami, QQ Music, and so on.
  • the applications mentioned in the following embodiments may be applications that have been installed when the electronic device leaves the factory, or may be applications downloaded by the user from the network or obtained from other electronic devices during the use of the electronic device.
  • the electronic device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, Antenna 1, Antenna 2, Mobile Communication Module 150, Wireless Communication Module 160, Audio Module 170, Speaker 170A, Receiver 170B, Microphone 170C, Headphone Interface 170D, Sensor Module 180, Key 190, Motor 191, Indicator 192, Camera 193, Display screen 194, and subscriber identification module (subscriber identification module, SIM) card interface 195 and so on.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated into one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 . The controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the display screen 194 is used to display a display interface of an application, such as a viewfinder interface of a camera application.
  • Display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Miniled, a MicroLed, a Micro-oLed, a quantum dot light-emitting diode (QLED), or the like.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
  • when the shutter is opened, light is transmitted to the camera photosensitive element through the lens; the light signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area may store the operating system, and the software code of at least one application (eg, iQIYI application, WeChat application, etc.).
  • the storage data area may store data generated during the use of the electronic device 100 (eg, captured images, recorded videos, etc.) and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. Such as saving pictures, videos and other files in an external memory card.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the sensor module 180 may include a pressure sensor 180A, a touch sensor 180K, an ambient light sensor 180L, and the like.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device is in the pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints. Electronic devices can use the collected fingerprint characteristics to unlock fingerprints, access application locks, take photos with fingerprints, and answer incoming calls with fingerprints.
  • the touch sensor 180K is also called a “touch panel”.
  • the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device, which is different from the location where the display screen 194 is located.
  • the keys 190 include a power-on key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback. For example, touch operations acting on different applications (such as taking pictures, playing audio, etc.) can correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card. The SIM card can be connected to and separated from the electronic device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
  • the structures shown in FIG. 2 do not constitute a specific limitation on the mobile phone, and the mobile phone may include more or fewer components than shown in the figure, combine some components, split some components, or have a different component arrangement.
  • the electronic device shown in FIG. 2 is used as an example for description.
  • FIG. 3 shows a block diagram of a software structure of an electronic device provided by an embodiment of the present application.
  • the software structure of the electronic device can be a layered architecture, for example, the software can be divided into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer (framework, FWK), an Android runtime (Android runtime) and system libraries, and a kernel layer.
  • the application layer can include a series of application packages. As shown in FIG. 3, the application layer may include camera, settings, a skin module, a user interface (UI), third-party applications, and the like. The third-party applications can include WeChat, QQ, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, a voice assistant function, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer can include some predefined functions. As shown in Figure 3, the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 100 .
  • for example, the phone manager manages call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files, etc.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • the Android runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • the core library consists of two parts: one part is the function modules that the Java language needs to call, and the other is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (media library), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the system library can also include voice assistant services.
  • the voice assistant service can be used to recognize the voice data input by the user, identify the keywords contained in the voice data, and control the electronic device to perform related operations.
  • the electronic device may recognize the user's voice through the user's voice transmitted by the receiver 170B or the microphone 170C as shown in FIG. 2 . Assuming that the user's voice is "play a movie", the electronic device can recognize that the keywords are "play” and "movie", and the electronic device can open an application program capable of playing a movie to play the movie. Alternatively, the electronic device may play a movie that has already been stored.
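  The keyword-driven dispatch described above can be sketched roughly as follows; the keyword table and action names are hypothetical, not part of this application.

```python
def handle_command(text: str) -> str:
    """Match recognized keywords in a voice command to a device action."""
    keyword_actions = {
        ("play", "movie"): "open_video_app",
        ("play", "music"): "open_music_app",
    }
    words = set(text.lower().split())
    for required_keywords, action in keyword_actions.items():
        # An action fires only when all of its keywords appear in the command.
        if all(word in words for word in required_keywords):
            return action
    return "no_match"
```

  With this sketch, “play a movie” would map to opening a video-playing application, matching the example in the text.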
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • the hardware layer may include various types of sensors, for example, an acceleration sensor, a gyroscope sensor, and a touch sensor involved in the embodiments of the present application.
  • each user of the voice assistant service may have a user ID, and the user ID may be an ID that uniquely identifies a user. For example, it may be the user's phone number or may be a Huawei account number or the like.
  • the user can log in the user ID on the electronic device through the user ID and the preset password.
  • the user ID here can identify the identity of a user.
  • each user identification may be associated with the identification of at least one electronic device.
  • a user can log in a user ID on multiple electronic devices such as mobile phones, tablet computers, and laptop computers. Therefore, the identification of the electronic device associated with the user identification of the user may include the identification of the mobile phone, the identification of the tablet computer, and the identification of the notebook computer.
  • the user may set the identification of the electronic device associated with the user identification, or the user's voice assistant may determine the electronic device to which the user identification is logged in, thereby associating the electronic device with the logged in user identification with the user identification.
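  One way to picture the association between a user ID and its device identifiers is a small registry kept by the voice assistant; this class and its method names are assumptions for illustration only.

```python
class DeviceRegistry:
    """Track which device identifiers are associated with each user ID."""

    def __init__(self):
        self._devices = {}  # user_id -> set of device ids

    def login(self, user_id: str, device_id: str) -> None:
        # A device becomes associated when the user ID logs in on it.
        self._devices.setdefault(user_id, set()).add(device_id)

    def logout(self, user_id: str, device_id: str) -> None:
        self._devices.get(user_id, set()).discard(device_id)

    def devices_of(self, user_id: str) -> list:
        return sorted(self._devices.get(user_id, set()))
```

  So logging the same user ID in on a mobile phone, a tablet computer, and a notebook computer would associate all three device identifiers with that one user ID.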
  • a user can own several devices, and a public device (e.g., a large screen at home) can also be shared by several users.
  • the user ID may be the voice assistant ID used by the user.
  • the networking method may form a group for the voice assistants of different users.
  • the group information may include the user's device information.
  • when user A issues an instruction, user A's voice assistant can directly query the identifiers of other devices in the network and send the instruction directly to user B's device, without going through user B's voice assistant.
  • user A's voice assistant can find user B through the address book/application (such as WeChat, QQ and other instant messaging applications) and send a control message to user B's device, so that User B's device executes the corresponding command.
  • when user A's electronic device sends an instruction to user B's electronic device or user B's voice assistant, it can also pop up a prompt to user B. After user B agrees, user B's device or user B's voice assistant executes the relevant instruction.
  • different users can communicate through user IDs. For example, user A wants to send a reminder message to user B to remind user B to make an appointment at 8 o'clock. User A may input the instruction "remind user B to make an appointment at 8" into electronic device A.
  • the electronic device A can look up user B in the address book, for example, can look up the phone number named “user B” in the contacts. If the electronic device A finds a phone number named “user B” in the address book, it can send a short message of “please make an appointment at 8 o'clock” to the phone number.
  • electronic device A may search user B's voice assistant according to user B's phone number, and send an input instruction "remind user B to make an appointment at 8" to user B's voice assistant, or a reminder message generated according to the instruction.
  • User B's voice assistant can send the instruction or reminder message to the electronic device associated with the user ID of user B.
  • User B's electronic device may display the reminder message, or may also generate a schedule for reminding at 8 o'clock.
  • user B's electronic device may seek user B's consent before generating a schedule for a reminder at 8:00; for example, the display may show “User A reminds you to go to the appointment at 8:00. Generate a reminder schedule for 8:00?” After user B agrees, user B's electronic device can generate a schedule for a reminder at 8 o'clock.
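  The consent-gated reminder flow above might be modeled as below; the function shape and prompt wording are illustrative assumptions, with `consent_fn` standing in for the recipient's on-screen confirmation dialog.

```python
def deliver_reminder(sender: str, recipient: str, text: str, hour: int, consent_fn):
    """Ask the recipient for consent, then create a schedule entry if agreed."""
    prompt = f"{sender} reminds you: {text}. Generate a reminder at {hour}:00?"
    if consent_fn(prompt):
        # The recipient agreed, so a schedule entry is generated on their device.
        return {"owner": recipient, "hour": hour, "text": text}
    return None  # the recipient declined; no schedule is created
```

  The key point the sketch captures is that the schedule entry appears on user B's device only after user B's confirmation, never automatically.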
  • Users can wake up the voice assistant service by entering voice data.
  • the user may wake up the voice assistant service by entering voice data specifying textual content.
  • the specified text content may be the voice data used by the user when registering the voice data for waking up the voice assistant service.
  • the electronic device can perform text recognition on the voice data to determine whether the specified text content exists. If the specified text content exists in the voice data, the electronic device enters the voice assistant service.
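  In sketch form, this wake-word path reduces to checking whether the specified text appears in the recognized transcript (assuming speech recognition has already produced the transcript; the default phrase is a placeholder):

```python
def should_wake(transcript: str, wake_phrase: str = "hello") -> bool:
    """Return True if the recognized transcript contains the wake phrase."""
    return wake_phrase.lower() in transcript.lower()
```

  If the specified text content is present, the electronic device enters the voice assistant service; otherwise it stays asleep.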
  • the user may wake up the voice assistant service by entering random voice data or voice data specifying text content.
  • the electronic device can acquire the voiceprint feature of the user according to the voice data input by the user.
  • the electronic device can compare the acquired voiceprint features with the stored voiceprint features, and when the comparison result indicates that the matching is successful, the electronic device can enter the voice assistant service.
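  Voiceprint comparison is commonly done by measuring the similarity between feature vectors; the cosine-similarity measure and the threshold below are illustrative assumptions, since this application does not specify a matching algorithm.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def voiceprint_match(stored, candidate, threshold=0.8):
    """Wake the assistant only if the candidate voiceprint is close enough."""
    return cosine_similarity(stored, candidate) >= threshold
```

  A successful match lets the device enter the voice assistant service; a score below the threshold leaves it asleep.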
  • the user can light up the display screen by touching the display screen, or by touching a physical button on the electronic device, or by using a preset air gesture.
  • the manner of touching the display screen may include, for example, clicking on the display screen, double-clicking the display screen, or drawing a preset pattern, such as a letter, on the screen.
  • the pattern here may be preset, or may also be specified by an electronic device, which is not specifically limited in this application.
  • the preset air gestures may include, for example, palm sliding to the right, palm sliding to the left, finger sliding to the right, or finger sliding to the left, etc.
  • the air gestures may be preset by the user, or may also be specified by the electronic device, which is not specifically limited in this application.
  • the user can enter pre-set voice data, such as the user can say "hello".
  • the electronic device can receive the voice data input by the user as "Hello", and recognize that the voice data contains a wake-up word, so the electronic device enters the voice assistant service.
  • the screen can be turned on, and a prompt message can be displayed on the display screen to prompt the user to enter the voice assistant service.
  • the electronic device can display content such as “I'm here” or “What can I help you with” on the display screen, prompting the user to continue inputting instructions.
  • the electronic device may not turn on the screen, that is, keep the screen in a dark state, and prompt the user to enter the voice assistant service by outputting voice data.
  • the electronic device can prompt the user to enter the voice assistant service by outputting voice data with content such as “I'm here” or “What can I help you with”.
  • the specified text content for waking up the voice assistant service may be pre-recorded by the user on the electronic device, or may also be specified by the electronic device.
  • the voiceprint can be registered on the electronic device in advance.
  • the electronic device can prompt the user "please say hello" through the display screen, and the user can say "hello” according to the prompt.
  • the electronic device can perform voiceprint recognition according to the voice data input by the user, acquire the user's voiceprint feature, and store the user's voiceprint feature.
  • the electronic device may continue to prompt the user to input voice data.
  • the electronic device can display "please say, play some music” on the display screen, and the user can say “play some music” according to the prompt. After the registration is completed, the electronic device may display a registration completion prompt on the display screen.
  • the user can input voice data multiple times according to the prompt of the electronic device, so that the electronic device can recognize the voiceprint feature of the user according to the voice data input by the user multiple times.
  • the electronic device may receive the voice data input by the user, perform voiceprint recognition on the voice data, and obtain voiceprint characteristics of the voice data.
  • the electronic device can compare the acquired voiceprint feature with the stored voiceprint feature to determine whether they belong to the same person. If they do, the voice assistant service can be woken up; if they do not, the voice assistant service is not woken up.
  • the electronic device may also prompt the user through the display screen that the voice assistant service has not been awakened, or may prompt the user to re-enter the voice data.
  • multiple users may form a group through their respective user IDs.
  • User 1 can create a group first, and can invite users who want to be invited to join the created group.
  • Multiple users can set up a group in a face-to-face way.
  • Multiple users can enter the same numbers or characters on electronic devices through the face-to-face grouping function.
  • the electronic device can send the user ID and the numbers or characters entered by the user to the server of the voice assistant service, and the server can search for the user IDs that entered the same numbers or characters at the same time and place, and form these user IDs into a group.
  • the server of the voice assistant service can notify the electronic device corresponding to each user ID, and the electronic device can display the established group.
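  Server-side, face-to-face grouping can be approximated by bucketing join requests on the entered code plus a coarse time and location bucket. The window sizes, tuple shapes, and bucketing scheme are arbitrary assumptions for illustration.

```python
from collections import defaultdict

def form_groups(requests, time_window=10.0, cell_size=50.0):
    """Group user IDs that entered the same code at roughly the same time/place.

    requests: iterable of (user_id, code, timestamp_s, (x_m, y_m)) tuples.
    """
    buckets = defaultdict(list)
    for user_id, code, ts, (x, y) in requests:
        # Same code, same coarse time slot, same coarse location cell -> same bucket.
        key = (code, int(ts // time_window), int(x // cell_size), int(y // cell_size))
        buckets[key].append(user_id)
    # Only buckets with at least two users form a group.
    return [sorted(users) for users in buckets.values() if len(users) > 1]
```

  A real implementation would need to handle requests straddling bucket boundaries; the sketch only shows the matching idea of “same numbers or characters at the same time and place”.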
  • the user can add new members to the established group. For example, members of a group can invite new members to join the group. Not only that, the group owner who created the group can also remove any group member from the group.
  • the user can share some information in the group, such as location information, user status and other information.
  • group members can share what they are currently doing, or what they will do at a certain time.
  • user A can adjust the user status to working in the group, and other members of the group, such as user B and user C, can know that user A is working.
  • user A can set the user status to do not disturb in the group, and other members in the group, such as user B and user C, can learn that user A does not want to be disturbed now.
  • the voice assistant service of the electronic device may collect information set by the user.
  • the schedule information set by the user or the alarm clock information set by the user can be collected to adjust the user status.
  • the voice assistant service of the electronic device may also collect the user's status information through the sensor of the electronic device.
  • the user's state information can be collected through the camera, audio module, touch sensor and pressure sensor of the electronic device.
  • the electronic device may capture what the user is currently doing through the camera, such as the user is working, writing a homework, or sleeping.
  • the electronic device can also collect the user's voice data through the audio module, and perform text recognition on the voice data to judge the user's state.
  • the electronic device can also collect whether the user is using the electronic device through the touch sensor and the pressure sensor.
  • group members can also share their own location information.
  • user A can share his own location information in the group, and user B and user C in the group can determine, from the location information shared by user A, user A's current location and the distance between user A's location and their own locations.
  • if user B wants to know how to reach user A's location, user B can enter the navigation function through shortcut keys or voice commands. For example, user B can say "Go to user A"; user B's electronic device receives the voice data and performs text recognition, then enters the navigation function and, according to the recognized voice command, finds a route from user B's location to user A's location.
  • group members can also share information such as photos, videos or files with each other.
  • Each group can have a shared folder.
  • Group members can store the photos, videos or files they want to share in the shared folder, and any group member in the group can view the shared photos, videos or files in it. In addition, a group member can remind one or more other group members to view the shared folder.
  • a method for setting a reminder message for one or some group members by a group member in a group in the embodiment of the present application is described with reference to the accompanying drawings.
  • As shown in FIG. 5, an exemplary flowchart of a control method of an electronic device in an embodiment of the present application may include the following steps:
  • the first electronic device receives an instruction input by the user in the voice assistant.
  • the first electronic device may receive a voice instruction input by the user in the voice assistant, or an instruction input in a manual input manner.
  • the first electronic device may receive a voice command input by the user through the audio module.
  • the first electronic device identifies the user to be reminded from the instruction input by the user.
  • the first electronic device may perform text recognition on an instruction input by the user, and identify the user from the instruction. For example, if the command input by the user is "remind A to view group messages", the first electronic device can perform text recognition on the command, and can recognize that it is A that needs to be reminded, so the first electronic device can determine that the user to be reminded is A.
  • the first electronic device searches the voice assistant group for a user identifier related to the user to be reminded.
  • the user to be reminded here may be identified by a note name set for that user by the user of the first electronic device, or by a nickname the user has set for himself. For example, if the instruction input by the user is "remind mom to watch TV", the first electronic device can recognize that the user to be reminded is "mom". The first electronic device may look up note names and nicknames in the voice assistant group to determine the user ID of "mom".
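The instruction parsing and note-name lookup described above might look like this sketch; the instruction pattern, function name, and data layout are hypothetical.

```python
import re

def find_user_to_remind(instruction, group_members):
    """Extract the user to remind from an instruction like "remind mom to watch TV"
    and resolve it to a user ID via note names or nicknames in the group.

    `group_members` maps user_id -> {"note_name": ..., "nickname": ...}.
    Returns (user_id, reminder_text), or (None, None) if nothing matches.
    """
    match = re.match(r"remind (\S+) (?:to )?(.*)", instruction, re.IGNORECASE)
    if not match:
        return None, None
    name, reminder_text = match.group(1), match.group(2)
    for user_id, names in group_members.items():
        # A note name set by this device's user, or the member's own nickname.
        if name in (names.get("note_name"), names.get("nickname")):
            return user_id, reminder_text
    return None, None
```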
  • the first electronic device sends the first message to the second electronic device of the user to be reminded.
  • the first message here may be an instruction received by the first electronic device, or may also be a reminder message generated by the first electronic device according to an instruction input by a user. For example, if the first electronic device receives the instruction "remind mother to watch TV", the first electronic device may send the instruction "remind mother to watch TV” to the second electronic device. Alternatively, the first electronic device may also generate a reminder message according to the instruction, such as "watching TV” and the like. The first electronic device may send the reminder message to the second electronic device.
  • the first electronic device may send the first message to the voice assistant of the user to be reminded through the voice assistant. For example, the first electronic device determines that the user to be reminded is "user A", so the voice assistant of the first electronic device can send the first message to the voice assistant of user A.
  • the first electronic device may send the first message to some or all of the electronic devices associated with the user identification of the user to be reminded through the voice assistant.
  • the first electronic device can determine the usage of the electronic devices associated with the user ID of the user to be reminded, and can, through the voice assistant, send the instruction or reminder message to the electronic device that is being used among those devices.
  • the first electronic device can send a request message to obtain whether the user is using the electronic device through the voice assistant to the electronic device associated with the user ID of the user to be reminded.
  • the electronic device associated with the user identification of the user to be reminded can determine whether the user is using the electronic device according to the sensor, camera and/or audio module, and send the obtained result to the first electronic device.
  • the electronic device may determine whether the user is using it according to the pressure sensor or the touch sensor. Alternatively, the electronic device may determine this through the camera: for example, it can turn on the camera, and when a face is recognized, it can be determined that the user is using the electronic device. The electronic device can also use the audio module: for example, it can turn on the audio module to detect whether a user speaks, and if a user speaks, it can be considered that the user is using the electronic device.
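A minimal sketch of picking the in-use devices, assuming each device has already reported boolean results from the touch/pressure sensor, camera, and audio module; all field names are illustrative.

```python
def devices_in_use(associated_devices):
    """From the devices associated with a user ID, pick those currently in use.

    Each device is a dict reporting hypothetical sensor flags:
    touch/pressure activity, a face recognized by the camera, or
    speech detected by the audio module.
    """
    in_use = []
    for device in associated_devices:
        if (device.get("touch_or_pressure_active")
                or device.get("face_recognized")
                or device.get("speech_detected")):
            in_use.append(device["device_id"])
    return in_use
```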
  • the second electronic device displays a reminder message generated according to the instruction input by the user.
  • the voice assistant of the second electronic device can send the instruction to the second electronic device.
  • the second electronic device may generate a corresponding reminder message according to the instruction.
  • the voice assistant of the second electronic device may generate a corresponding reminder message according to the instruction, and send the reminder message to the second electronic device.
  • user A uploads a picture to a shared folder and wants to remind user B to view the picture. Therefore, user A can input instructions to user A's voice assistant by means of manual input or by means of inputting voice instructions.
  • User A's voice assistant can parse the instruction and generate a corresponding reminder message.
  • User A's voice assistant can find user B in the voice assistant group and send the reminder message to user B's voice assistant. User B's voice assistant then sends it to user B's electronic device, and user B's electronic device can display the reminder message through the display screen, as shown in FIG. 6.
  • the voice assistant of user B can send the reminder message to all or some of the electronic devices among the electronic devices associated with the user ID of user B.
  • the voice assistant of user B can obtain the current usage of the electronic device associated with the user ID of user B, and the voice assistant of user B can send the reminder message to the electronic device in use.
  • the voice assistant of user A may send the instruction input by user A to the voice assistant of user B.
  • the command is parsed by user B's voice assistant to generate a corresponding reminder message.
  • the electronic device A can send the first information or instruction to the electronic device B at a scheduled time, and the electronic device B may set a reminder message on the electronic device B according to the instruction or the first information.
  • the first information here may include event information and a time point.
  • the electronic device A may save the instruction or the first information locally and, when the time point is reached, send the instruction or the first information to the electronic device B; the electronic device B then issues the reminder according to the time point and the event information.
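The store-then-send-at-the-time-point behavior on electronic device A can be sketched with a small priority queue; the class and method names are illustrative assumptions.

```python
import heapq

class TimedReminderQueue:
    """Store (time_point, event, target) instructions on device A and
    release those whose time point has arrived, for delivery to device B."""

    def __init__(self):
        self._heap = []

    def schedule(self, time_point, event, target_user):
        # Keep instructions ordered by their time point.
        heapq.heappush(self._heap, (time_point, event, target_user))

    def due(self, now):
        """Pop and return every instruction whose time point has been reached."""
        released = []
        while self._heap and self._heap[0][0] <= now:
            released.append(heapq.heappop(self._heap))
        return released
```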
  • the instructions can include setting reminders, calling applications, and controlling peripheral devices.
  • user A can say to the voice assistant of electronic device 1, "Play a birthday song for user B at 0:00 on May 1st". Then the voice assistant of user A can directly send the message to the voice assistant of user B, or to the electronic device 2 associated with the user ID of user B.
  • the electronic device 2 can set a reminder message to open an application that can play music at 0:00 on May 1 to play the birthday song.
  • user A's voice assistant may store the instruction "play a birthday song for user B at 0:00 on May 1" in electronic device 1.
  • the instruction is sent by user A's voice assistant to user B's voice assistant, or to the electronic device 2 associated with user B's user ID.
  • the electronic device 2 can open an application that can play music according to the instruction to play the birthday song.
  • the electronic device 1 may send the instruction to the voice assistant of the user B or to the electronic device 2 associated with the user ID of the user B in advance. For example, at 23:58 on April 30, the electronic device 1 may send an instruction to the voice assistant of user B, or to the electronic device 2 associated with the user ID of user B.
  • when 0:00 on May 1 arrives, the application that can play music is opened to play the birthday song.
  • a suitable device in the space can be selected to play the birthday song; for example, a public device can be chosen to play it.
  • user A can say to the voice assistant of electronic device 1, "When sleeping at night, adjust B's air conditioner to 22 degrees, or adjust user B's air conditioner to a higher temperature".
  • the voice assistant of user A can directly send a message to the voice assistant of user B or to the electronic device 2 associated with the user ID of user B.
  • when the electronic device 2 reaches the preset time point or detects that user B enters a rest state, it controls the air conditioner to adjust its temperature to 22 degrees, or controls the temperature to be within a specified higher range.
  • user A's voice assistant may store the instruction "adjust B's air conditioner to 22 degrees when sleeping at night, or adjust user B's air conditioner a little higher" in electronic device 1 .
  • the instruction can be sent to the voice assistant of user B or to the electronic device 2 associated with the user ID of user B.
  • the electronic device 2 can control the air conditioner to adjust the temperature of the air conditioner to 22 degrees, or can control the temperature of the air conditioner to be within a specified higher temperature range.
  • the electronic device 1 may send the instruction to the voice assistant of user B, or to the electronic device 2 associated with the user ID of user B, a period of time before the preset time point.
  • a suitable device in the area can be selected to adjust the temperature of the air conditioner.
  • user A's device, user B's device, or other devices may be selected.
  • the voice assistant of user A may also find the electronic device associated with the user ID of user B in the voice assistant group, and send the input instruction or the generated reminder message to the found electronic device .
  • the voice assistant of user A can determine the usage of the electronic devices associated with the user ID of user B, and can send the instruction or reminder message to the electronic device that is being used among those devices.
  • the electronic device A can send a request message for obtaining whether the user is using the electronic device to the electronic device associated with the user ID of the user B through the voice assistant.
  • the electronic device associated with the user ID of user B may determine whether the user is using the electronic device according to the sensor, camera and/or audio module, and send the obtained result to electronic device A.
  • a group member can also set reminder messages for some or all of other group members. For example, group member user A can set a reminder message for user B.
  • user A and user B are not in the same area, and user A can remind user B that "medicine needs to be taken".
  • User A can input relevant instructions to user A's voice assistant to remind user B to take medicine by manual input or by inputting voice instructions.
  • the voice assistant of user A can find the voice assistant of user B in the voice assistant group, and the voice assistant of user A can send the command input by user A or the reminder message generated according to the input command to the voice assistant of user B.
  • the voice assistant of user A may send an instruction or a reminder message to the voice assistant of user B through a mobile communication network or an instant messaging message.
  • User B's voice assistant can generate a corresponding reminder message according to the instruction, and send it to the user B's electronic device.
  • the voice assistant of user A can send the instruction or reminder message to the electronic device associated with the user ID of user B through a mobile data network or an instant messaging message.
  • User B's electronic device may also present the reminder message by ringing, vibrating, or voice prompt, and/or user B's electronic device may display the reminder message on the display screen.
  • user A and user B are in the same area, user A can remind user B that "there is a meeting at 8 am".
  • User A can input a relevant instruction to user A's voice assistant to remind user B that there is a meeting at 8:00 am by manual input or by inputting a voice instruction.
  • User A's voice assistant can search for user B's voice assistant in the voice assistant group, and user A's voice assistant can send an instruction or a reminder message generated according to the instruction to user B's voice assistant.
  • the voice assistant of user A may send an instruction or a reminder message to the voice assistant of user B through wireless local area network, Bluetooth, mobile communication network or instant messaging.
  • User B's voice assistant can send a reminder message to all or part of the electronic devices associated with User B's user ID.
  • user A's voice assistant may send an instruction or reminder message to all or part of the electronic devices associated with user B's user ID.
  • the voice assistant of user A can send the instruction or reminder message to the electronic device of user B through wireless local area network, bluetooth, mobile communication network or instant messaging.
  • User B's electronic device may also present the reminder message by ringing, vibrating, or voice prompt, and/or user B's electronic device may display the reminder message on the display screen.
  • each member of the group can set a corresponding reminder mode for other members on the electronic device.
  • group member 1 can set a unique ringtone for group member 2
  • group member 1 can set a unique ringtone for group member 3.
  • a reminder message from group member 2 can then be presented with the ringtone preset for group member 2.
  • a daughter wants to remind her mother to take her medicine. Therefore, the daughter can say "remind mother to take medicine" to the electronic device A by inputting a voice command.
  • the electronic device A can receive the voice command through a microphone or a receiver, and perform text recognition.
  • the electronic device A can identify the user who needs to be reminded, that is, "mom", from the voice command.
  • the electronic device A may search for a user related to "Mom" in the voice assistant group. For example, electronic device A may look up a user who is noted as "mom” in the voice assistant group.
  • the electronic device A can send the instruction or the reminder message generated according to the instruction to the found user, that is, the voice assistant of "mother", through the voice assistant.
  • the voice assistant of "Mom” can send an instruction or a reminder message to the electronic device B associated with the user ID of "Mom".
  • the electronic device A may send the instruction or the generated reminder message to the electronic device B associated with the user ID of "Mom” through the voice assistant.
  • the electronic device B can present the reminder message by vibrating and voice prompting "Daughter reminds you to take medicine".
  • a certain member of the group may also set a reminder message for other group members by making an appointment.
  • a daughter wants to remind her father and mother that there is a family dinner at 7 pm. Therefore, the daughter can input instructions to the electronic device A by means of manual input or input of voice instructions. For example, a daughter could say "Remind Dad and Mom that there is a family dinner at 7 p.m.”.
  • the electronic device A can receive the voice instruction and perform text recognition.
  • the electronic device A can recognize the voice command, and identify the users who need to be reminded from it, that is, "Mom" and "Dad".
  • the electronic device A may search for the users related to "Mom" and "Dad" in the voice assistant group, respectively.
  • the electronic device A may send the voice command, or the reminder message generated according to the voice command, to the voice assistants of the found users. That is, the electronic device A can send the voice instruction or reminder message to the voice assistants of "Mom" and "Dad" respectively.
  • the voice assistant of "Mom” can send a reminder message or instruction to some or all of the electronic devices B in the electronic devices associated with the user ID of "Mom”.
  • "Dad"'s voice assistant can send reminder messages or voice instructions to all or part of the electronic devices C in the electronic devices associated with the "Dad” user ID.
  • the electronic device B and the electronic device C can display the reminder message by ringing a bell.
  • after the electronic device B and the electronic device C receive the instruction, they can each create a schedule entry, so as to give a reminder for the 7 o'clock event.
  • a certain group member in the group can also customize the schedule for some or all of the other group members.
  • user A can customize Saturday's schedule for user B by manually inputting or inputting voice commands.
  • As shown in FIG. 9A, user A can say "make a schedule for user B on Saturday"; the electronic device A can receive the voice data and prompt the user through the display screen to start making the schedule.
  • For example, "please start making a schedule" may be displayed on the display screen.
  • User A can specify a schedule for user B according to the prompt of electronic device A. For example, user A can say "get up at 8 am", and electronic device A can recognize the voice data and record the relevant schedule.
  • User A can continue to say "go to a music class at 10 am", and the same electronic device A can continue to recognize the voice data and record the relevant schedule. Repeating the above manner, user A can record the schedule made for user B in electronic device A.
  • Electronic device A can search for user B's voice assistant in the voice assistant group.
  • Electronic device A may send the formulated schedule to user B's voice assistant through the voice assistant.
  • User B's voice assistant can send the schedule made by user A to electronic device B.
  • the electronic device B can display the received schedule on the display screen, and create a schedule on the electronic device B according to the content on the schedule, so that the user B can be reminded.
  • the electronic device A may send the formulated schedule to some or all of the electronic devices B in the electronic devices associated with the user ID of the user B through the voice assistant.
  • the electronic device A can send a request message for obtaining whether the user is using the electronic device to the electronic device associated with the user ID of the user B through the voice assistant.
  • the electronic device associated with the user ID of user B may determine whether the user is using the electronic device according to the sensor, camera and/or audio module, and send the obtained result to electronic device A.
  • the electronic device A may send the formulated schedule to the electronic device being used by the user through the voice assistant.
  • electronic device B may prompt user B through a display screen that a user has made a schedule for it.
  • the electronic device B can display “User A has made a schedule for you, please check it” through the display screen, and user B can choose to check or not to check.
  • user B can also choose whether to accept the schedule made by user A for it.
  • As shown in FIG. 9C, user B can choose to accept or reject the schedule made by user A by manually inputting or inputting a voice command. If user B accepts the schedule set by user A, the electronic device B can create a schedule based on its content.
  • If user B rejects the schedule set by user A, the electronic device B does not need to create a schedule. In addition, the electronic device B can feed back to the electronic device A whether user B accepted the set schedule, and the electronic device A can display user B's choice through the display screen.
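The schedule-making and accept/reject flow above might be sketched as follows; the entry format "<activity> at <time>" is an assumption about what speech recognition returns, and both function names are hypothetical.

```python
def build_schedule(entries):
    """Turn recognized voice entries like "get up at 8 am" into schedule items.

    Assumes each entry has already been speech-recognized to text of the form
    "<activity> at <time>"; real parsing would need to be far more robust.
    """
    schedule = []
    for text in entries:
        activity, _, time = text.rpartition(" at ")
        schedule.append({"time": time, "activity": activity})
    return schedule

def apply_schedule(schedule, accepted):
    """Create calendar events only if the receiving user accepted the schedule;
    return the events created (empty if the schedule was rejected)."""
    if not accepted:
        return []
    return [f"{item['time']}: {item['activity']}" for item in schedule]
```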
  • the voice assistant service can enable more users to participate, realize the interaction between multiple users, and can provide users with the voice assistant service more conveniently.
  • an exemplary flowchart of a control method for an electronic device includes the following steps.
  • the electronic device receives a control instruction input by a user.
  • the electronic device has a user identification, the user identification can be used to identify the user's identity information, and the user identification can be used to log in the voice assistant service.
  • the electronic device can wake up the voice assistant service first.
  • the user can enter preset voice data with specified text content to wake up the voice assistant service.
  • the voice assistant service can instruct the user to input a control command through the display screen.
  • the user can input a control instruction of "playing music” to the electronic device by means of manual input. Alternatively, the user can say "play music” and input voice control commands to the electronic device.
  • the electronic device may receive the voice data input by the user through the receiver 170B or the microphone 170C as shown in FIG. 2 .
  • the electronic device can perform text recognition on the voice data to obtain control instructions.
  • if no valid control instruction is obtained, the electronic device may prompt the user to re-input the voice data. For example, because the external environment is noisy and the user's voice is low, the electronic device does not receive the voice data input by the user. Referring to FIG. 11A, the electronic device may display a prompt message such as "What did you say? I didn't hear it" through the display screen, prompting the user to re-input the voice data.
  • the electronic device acquires at least one target device in an area. This can be understood as acquiring other devices adjacent to the electronic device, where the other devices may be in the same area as the electronic device.
  • an area can be a concept of space, and can refer to an interior. For example, it can be in an office, or it can be in a house, etc.
  • an area may also refer to a range in which short-distance communication can communicate. For example, it may be a range that supports Bluetooth communication, a range that supports zigbee communication, and the like.
  • An area can also refer to an area where electronic devices can connect to the same gateway device. For example, it can be an area connected to the same wireless local area network (WLAN), or it can be an area connected to the same wireless access point (AP).
  • the electronic device can search for a Bluetooth device.
  • the Bluetooth device searched by the electronic device can be used as the aforementioned at least one target device.
  • the electronic device may receive device information sent from the connected gateway device.
  • the device information may be information of devices communicatively connected to the gateway, and the devices represented by the device information serve as the aforementioned at least one target device.
  • the target device may be a device in the same area as the electronic device and in the same voice assistant service group as the electronic device.
  • the electronic device may also determine at least one target device through the location information shared by the group members in the voice assistant service group. For example, in the voice assistant service group, the electronic device can determine the distance between the position of the group member and its own position according to the position information shared by the group member. Among the multiple calculated distances, a distance less than or equal to a specified value is determined, and a device corresponding to the distance is used as at least one target device.
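The distance-based selection of target devices from locations shared in the group can be sketched as below; planar (x, y) coordinates and the Euclidean metric are simplifying assumptions in place of real geolocation.

```python
import math

def nearby_group_devices(my_position, shared_positions, max_distance):
    """From positions shared in the voice assistant group, keep the members whose
    distance to this device is at most `max_distance` as candidate target devices.

    Positions are (x, y) coordinates in meters for simplicity.
    """
    targets = []
    for user_id, pos in shared_positions.items():
        if math.dist(my_position, pos) <= max_distance:
            targets.append(user_id)
    return targets
```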
  • the electronic device sends a request message for acquiring the current user state to at least one target device.
  • the request message may carry the user identifier of the electronic device.
  • the electronic device may send a request message for acquiring the current user state to at least one target device through a mobile communication network, a wireless local area network, or a Bluetooth device.
  • the electronic device may also forward the request message for obtaining the current user state to at least one target device through the third-party device.
  • the third-party device here may be the same gateway device to which the electronic device is connected to at least one target device, or may also be a server of the voice assistant service that the electronic device has logged in to.
  • the electronic device and the at least one target device may be in the same voice assistant service group.
  • the electronic device may send the request message for acquiring the current user state to the server of the voice assistant service; the server then sends the request message to the voice assistant service logged in on each of the at least one target device, and that voice assistant service acquires the current user state.
  • the server of the voice assistant service to which the electronic device has logged in may send the request message to at least one target device. For example, it can be sent to at least one target device through a mobile communication network or a wireless local area network.
  • the current user status may refer to whether there is a user within the range that can be monitored by the target device, and if there is a user, what the user is currently doing. For example, there is a user within the range monitored by the target device, and the user is sleeping, or studying, and so on.
  • the current user state can also be the external environment required by the user. For example, the user needs the external environment to be quiet, or the user has no requirements on the noise level of the external environment, and so on. If there is no user within the range that the target device can monitor, the current user state may be that there is no user.
  • the target device may determine whether the user is using the target device according to a pressure sensor or a touch sensor. For example, if the user is using the target device, it can be considered that there is a user, and the user has no requirement on the noise level of the external environment.
  • the target device may determine the current user state through a camera. For example, the target device can turn on the camera to determine whether there is a user; if there is a user, whether the user is working, studying, or sleeping.
  • the target device may determine the current user state through the audio module. For example, the target device can turn on the audio module to determine whether there is a user. The target device can turn on the microphone to determine whether there is a user speaking, if there is a user speaking, it can be considered that there is a user, and if there is no user speaking, it can be considered that there is no user.
  • the target device can determine the current user state through a pressure sensor or a touch sensor, a camera, and an audio module together. For example, the target device can turn on the camera and the audio module; if the camera detects a user but the audio module detects no input voice data, the target device can infer that the user is currently working, studying or sleeping, and that the user needs the external environment to be quiet.
  • the target device can turn on the camera and microphone.
  • the target device determines that there is a face through the camera, so it can be considered that there is a user.
  • the target device determines through the microphone that the user has not input voice data, so the target device may consider that the current user is working, studying or sleeping. Therefore, the target device can determine that the current user state is that there is a user, and the user is studying, working or sleeping, that is, the user needs the external environment to be quiet.
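The camera-plus-microphone heuristic above can be sketched as a small decision function. This is an illustrative sketch of the inference rule only; the state field names are assumptions, not terminology from the patent.

```python
def infer_user_state(face_detected, speech_detected):
    """Combine camera and microphone observations into a coarse user
    state, following the heuristic described above: a detected face
    with no detected speech is taken to mean the user is working,
    studying or sleeping and needs a quiet environment.
    """
    if not face_detected:
        # No face seen by the camera: report that there is no user.
        return {"user_present": False, "needs_quiet": False}
    if speech_detected:
        # User is actively speaking: present, no quiet requirement inferred.
        return {"user_present": True, "needs_quiet": False}
    # Face seen but no voice input: assume working/studying/sleeping.
    return {"user_present": True, "needs_quiet": True}
```

A real target device would feed this from its camera and audio module and send the resulting state back to the requesting electronic device.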
  • the target device can obtain the authorization of the user before acquiring the current user state.
  • the target device can obtain the user's authorization before obtaining the current user state each time, and after obtaining the user's authorization, can obtain the current user state through the above method.
  • the target device may obtain the user's authorization before obtaining the current user state for the first time. After obtaining the user's authorization, the target device may obtain the user's authorization by default each time it obtains the current user state.
  • the target device may be a mobile phone, a tablet computer, a wearable device (for example, a watch, a wristband, or a smart helmet), a vehicle-mounted device, a smart home device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), and so on.
  • for example, the electronic device may determine at least one target device within its area. Suppose the target devices in the area include camera A and mobile phone B. The electronic device can send a request message for acquiring the current user state to camera A and mobile phone B. Camera A then determines whether there is a user within the range it can scan and what the user is doing, and mobile phone B can acquire the current user state through at least one of its sensors, camera and audio module. Camera A and mobile phone B can each send the acquired current user state to the electronic device.
  • the at least one target device sends the acquired at least one current user state to the electronic device.
  • the target device may send the acquired at least one current user state to the electronic device through a mobile communication network, a wireless local area network, or Bluetooth.
  • the target device may also forward the acquired at least one current user state to the electronic device through the third-party device.
  • the third-party device here may be a gateway device to which both the target device and the electronic device are connected, or may be a server of the voice assistant service that the target device has logged in to.
  • the at least one target device may forward the acquired at least one current user state to the electronic device through the server of the voice assistant service it has logged in to.
  • the at least one target device and the electronic device may be in the same voice assistant service group.
  • the at least one target device can send the acquired at least one current user state to the server of the voice assistant service; the server sends it to the voice assistant service logged in on the electronic device, which then delivers the at least one current user state to the electronic device.
  • the server of the voice assistant service to which at least one target device has logged in may send at least one current user status to the electronic device. For example, it can be sent to the electronic device through a mobile communication network or a wireless local area network.
  • the electronic device executes a control instruction according to at least one current user state.
  • if the at least one current user state includes a state in which a user is studying, working or sleeping, the electronic device can keep the volume within a specified volume range when executing the control instruction. If no such state exists, that is, no user needs the external environment to be quiet, the electronic device may determine at least one peripheral device in the current network connection when executing the control instruction, and can execute the control instruction through the at least one peripheral device.
  • the specified volume range here may be preset, for example, may be volume 5-volume 10, etc., which is not specifically limited in this application.
  • for example, when the control instruction is "play some music", the electronic device can keep the volume within the specified volume range while playing music. If the at least one current user state received by the electronic device does not indicate that any user is studying, working or sleeping, the electronic device may search for peripheral devices in the current network connection; if it finds an audio device, such as a Bluetooth speaker, it can play the music through that audio device.
  • otherwise, the electronic device may execute the control instruction itself. For example, when the control instruction is "play some music", no user in the received current user states is studying, working or sleeping, and the electronic device finds no peripheral device in the current network connection, the electronic device can open an application that can play music and play the music. In this case, the volume may be greater than the aforementioned specified volume range.
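The "play some music" decision logic above can be sketched as follows. This is a hedged illustration: the device names, the returned (device, volume) convention, and the volume cap of 10 are assumptions used for the sketch, not values from the patent.

```python
def execute_play_music(user_states, peripherals, max_quiet_volume=10):
    """Decide how to execute a 'play some music' instruction from the
    collected user states, following the logic described above.

    Returns an (output_device, volume) pair; volume None means no cap
    is applied (playback may exceed the specified quiet range).
    """
    if any(s.get("needs_quiet") for s in user_states):
        # Someone nearby is studying/working/sleeping: play locally,
        # clamped to the specified volume range.
        return ("local_speaker", max_quiet_volume)
    if peripherals:
        # Nobody needs quiet and a peripheral (e.g. a Bluetooth
        # speaker) is on the network: route playback to it.
        return (peripherals[0], None)
    # No quiet requirement and no peripheral found: play locally.
    return ("local_speaker", None)
```

The same shape generalises to other control instructions: check the collected states first, then fall back through peripherals to local execution.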
  • the target device can also receive voice data from the user.
  • for example, user A inputs the voice control instruction "play some music" to electronic device 1.
  • electronic device 1 obtains a current user state from target device 2 through the above steps 1002-1004.
  • the current user state indicates that user B is working, studying or sleeping, so electronic device 1 can open an application that can play music and keep the volume within a specified volume range.
  • user B can input voice data, such as "I'm studying" or "be quiet", to target device 2 to prompt user A to lower the volume.
  • target device 2 can perform text recognition on the voice data.
  • target device 2 can send the recognition result to electronic device 1, so that electronic device 1 can reduce the volume according to the recognition result.
  • the volume reduced by the electronic device 1 may be preset.
  • user A wants to play music in the living room, so he can wake up the electronic device 1 of user A, and input a voice control instruction, such as "play some music".
  • the electronic device 1 may first determine at least one target device according to the voice instruction before performing the operation of playing music.
  • the electronic device 1 can determine at least one target device through the connected gateway device 2 .
  • the electronic device 1 receives the device information that is sent from the gateway device 2 in communication with the gateway device 2 .
  • the electronic device 1 can determine that the target device 3 and the target device 4 exist according to the device information.
  • the electronic device 1 may send a request message for acquiring the current user state to the target device 3 and the target device 4 through the gateway device 2 .
  • the target device 3 and the target device 4 receive the aforementioned request message, and turn on the camera and/or the audio module to obtain the current user status.
  • the target device 3 determines that there is a user and the user is working, and the target device 4 determines that there is no user.
  • the target device 3 and the target device 4 send the acquired current user state to the electronic device 1 through the gateway device 2 .
  • the electronic device 1 determines that the received current user states include a state in which a user is working, so the electronic device 1 can open the application for playing music and, according to the user's working state, keep the volume in a lower range (e.g., volume 10).
  • the target device 3 receives the voice instruction of user B, for example, the voice instruction may be "be quiet".
  • the target device 3 can send the voice command or the reminder message generated according to the voice command to all electronic devices in the area to which it belongs.
  • the target device 3 first determines whether it is itself running a media service; if so, it can respond to the voice command and reduce the volume of its media service. If it is not running a media service, it can send the voice command, or the reminder message generated from the voice command, to all electronic devices in the area to which it belongs.
  • the target device 3 may determine whether the current user state has been sent within a period of time before receiving the voice instruction. If the target device 3 has sent the current user state, the voice command or the reminder message generated according to the voice command can be sent to the device that receives the aforementioned current user state. For example, the target device 3 has sent the current user status to the electronic device 1 within a period of time before receiving the voice command, so the target device 3 can send the voice command or the reminder message to the electronic device 1 .
  • the target device 3 can determine whether other devices in the area to which it belongs are running media services. For example, the target device 3 may send a request message for acquiring the current device state to other devices in the area to which it belongs. Other devices can determine whether they are currently running media services, and send the acquired current device state to the target device 3 . Optionally, other devices may send a response message to the target device 3 when they are running the media service, so as to inform the target device 3 that the media service is currently running. The target device 3 may send the voice instruction or a reminder message generated according to the voice instruction to other devices currently running the media service.
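The routing priority described in the last three bullets can be sketched as one small function. The priority order follows the text (handle locally, then devices recently sent our user state, then devices reported to be running a media service); the identifiers and the `"self"` sentinel are illustrative assumptions.

```python
def route_quiet_command(self_playing_media, recently_notified, devices_running_media):
    """Choose recipients for a user's 'be quiet' voice command.

    - If this device is itself playing media, handle the command
      locally ("self").
    - Otherwise, prefer devices we recently sent our current user
      state to.
    - Otherwise, fall back to devices in the area that report they
      are running a media service.
    Returns a list of recipient IDs.
    """
    if self_playing_media:
        return ["self"]
    if recently_notified:
        return list(recently_notified)
    return list(devices_running_media)
```

In the scenario above, target device 3 had just sent its user state to electronic device 1, so the "be quiet" command is routed to electronic device 1 rather than broadcast.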
  • after receiving the voice command or the corresponding reminder message, the electronic device 1 can lower the volume of the current music, e.g., to 5, in response to the voice command or the reminder message.
  • the electronic device 1 may also display a prompt message such as "the current user B needs to lower the volume of the music" through the display screen.
  • the electronic device 1 may also seek the consent of user A before lowering the volume of the music. For example, "User B needs to lower the volume, agree or not" may be displayed on the display screen, and the volume of the music is decreased when user A agrees.
  • the user A can input an instruction of whether to agree to the electronic device 1 by manual input or by inputting a voice instruction.
  • the electronic device when it executes the control instruction, it can also acquire at least one control device in the current network connection.
  • the control device here may be a device for controlling smart home appliances or in-vehicle devices.
  • the electronic device can send a control command to the control device, so that the control device can control the smart home appliance or the in-vehicle device according to the control command.
  • user A wants to watch a movie in the living room, so he can wake up the electronic device 1 and input a voice control command, such as "play a movie".
  • the electronic device 1 determines the target device 3 through the connected gateway device 2 .
  • the electronic device 1 may send a request message to the target device 3 to obtain the current user state.
  • the target device 3 can receive the request message and turn on the camera and audio device.
  • the target device 3 determines that there is no user, and sends the current user status to the electronic device 1 .
  • the electronic device 1 can determine that there is no current user state in which the user is studying, working or sleeping in the acquired current user state, so the electronic device 1 can play the movie on the large-screen device through the large-screen device currently connected to the network.
  • the electronic device 1 may display on the display screen a prompt message that there is a large-screen device currently connected to the network, and the large-screen device is being used to play a movie.
  • the electronic device 1 displays on the screen that a large-screen device exists in the current network connection and asks whether to use the large-screen device to play the movie; after the user agrees, the movie can be played through the large-screen device.
  • the user can manually input or voice input the instruction of whether or not to agree.
  • the scene may further include at least one control device, such as control device 4 and control device 5 as shown in FIG. 14B .
  • the electronic device 1 can determine that a darker environment is required to play a movie, so the electronic device 1 can send a control command to close the curtains to the control device 4 and a control command to turn off the lights to the control device 5 . Therefore, the control device 4 can close the curtains according to the control command of the electronic device 1 , and the control device 5 can turn off the lighting according to the control command of the electronic device 1 .
  • the electronic device 1 may determine whether the user A agrees or not before sending the control instruction to the control device 4 and the control device 5 .
  • the electronic device 1 may display prompt messages "whether to close the curtains" and “whether to close the lights” on the display screen.
  • User A can manually input or voice input an instruction whether to agree or not, and the electronic device 1 can send a control instruction to the control device 4 and the control device 5 after the user inputs the agreed instruction.
  • the target device 3 receives an incoming call request. Since a user needs to answer the incoming call on target device 3, target device 3 can send the current user state to electronic device 1, indicating that the user currently needs the external environment to be quiet.
  • the electronic device 1 can reduce the volume of the currently playing media (in the scenario of FIG. 14C , the volume of the TV currently playing the media can be lowered) to a predetermined range according to the current user status sent by the target device 3 .
  • user A wants to watch a movie in the living room, so he can wake up the electronic device 1 and input a voice control command, such as "play a movie".
  • the electronic device 1 can determine that target device 3 and target device 4 exist in the area to which it belongs. Target device 3 determines through its camera that one or more users are in front of the large-screen device and can conclude that they are waiting to watch a movie, so it can send electronic device 1 a current user state indicating that one or more users are waiting to watch a movie.
  • the electronic device 1 can play a movie through the large-screen device currently connected to the network.
  • the above-mentioned target device 3 may be a large-screen device, and the large-screen device can determine whether there are one or more faces through the camera, or whether one or more users are looking at the large-screen device through the camera. If the large-screen device determines that there are one or more human faces, or the large-screen device determines that one or more users are looking at the large-screen device, the large-screen device can determine that one or more users are currently waiting to watch a movie.
  • the user wants to play music in the car, so he can wake up the electronic device 1 and input a voice control command.
  • the electronic device 1 can determine that no user is currently studying, working or sleeping, so it can open an application that can play music and play the music through the public device 2 in the car. Assuming that electronic device 1 then receives an incoming call request, it can determine that answering the call requires a relatively quiet environment, so it can send a control command for closing the car windows to the in-vehicle device 3.
  • the vehicle-mounted device 3 can close the vehicle window according to the control instruction sent by the electronic device 1 .
  • before sending the control instruction for closing the car windows to the in-vehicle device 3, the electronic device 1 may display the prompt "whether to close the window" on the display screen, and send the control instruction to the in-vehicle device 3 after the user inputs an instruction of agreement.
  • the electronic device may include one or more processors 1601, one or more memories 1602, and one or more transceivers 1603, where the one or more memories 1602 store one or more computer programs and the one or more computer programs include instructions. Exemplarily, one processor 1601 and one memory 1602 are shown in FIG. 16.
  • when the instructions are executed by the one or more processors 1601, the electronic device 1600 is caused to perform the following steps: receiving a voice instruction input by the user through the voice assistant on the electronic device; determining the current user state of at least one user in the area to which the electronic device belongs; and responding to the voice instruction according to the current user state of the at least one user.
  • the processor 1601 may specifically perform the following steps: determining at least one target device in the area to which the electronic device belongs; and sending a first request message to the at least one target device through the transceiver 1603, where the first request message is used to acquire the current user state.
  • the transceiver 1603 receives at least one current user status from at least one target device.
  • the processor 1601 may specifically perform the following steps: if a first user state exists in the at least one current user state, executing the operation corresponding to the voice instruction, where the first user state represents the noise environment required by the user; or, if the first user state does not exist in the at least one current user state, searching for at least one peripheral device in the current network connection and executing the operation corresponding to the voice instruction through the at least one peripheral device.
  • the at least one target device has a target user identification
  • the electronic device has a user identification; the user identification and the target user identification are in the same voice assistant group.
  • the processor 1601 may specifically perform the following steps: in response to the voice instruction, generate the first information; the voice instruction includes event information and a time point.
  • the transceiver 1603 sends the first information to the at least one target device.
  • the instructions when executed by the one or more processors 1601, cause the electronic device 1600 to perform the following steps: receive a first request message from a first electronic device, the first request The message is used by the first electronic device to obtain the current user state; obtain the current user state; and send the current user state to the first electronic device.
  • the processor 1601 may specifically perform the following steps: acquiring the current user state by using a sensor; and/or acquiring the user's setting information to acquire the current user state.
  • the processor 1601 may specifically perform the following steps: receiving first information through the transceiver 1603; the first information includes event information and a time point. The event information is displayed according to the time point.
  • when the instructions are executed by the one or more processors 1601, the electronic device 1600 is caused to perform the following steps: receiving a voice instruction input by a user through a voice assistant on the electronic device; and in response to the voice instruction, sending the voice instruction to a second electronic device, where the electronic device has a first user identification, the second electronic device has a second user identification, and the first user identification and the second user identification are in the same voice assistant group.
  • the processor 1601 may specifically perform the following steps: in response to the voice command, generate a corresponding first message; the first message includes event information and a time point; send the first message through the transceiver 1603 A message is sent to the second electronic device.
  • the transceiver 1603 sends the voice instruction to the voice assistant corresponding to the second user identification through the voice assistant on the electronic device.
  • when the instructions are executed by the one or more processors 1601, the electronic device 1600 is caused to perform the following steps: receiving a voice instruction from the first electronic device through the transceiver 1603; generating a first message according to the voice instruction, where the first message includes event information and a time point; and displaying the event information according to the time point; or
  • receiving, through the transceiver 1603, a first message from the first electronic device, where the first message includes event information and a time point, and displaying the event information according to the time point.
  • the first electronic device has a first user identification, and the electronic device has a second user identification; the first user identification and the second user identification are in the same voice assistant group.
  • the transceiver 1603 receives, through the voice assistant, the first message from the voice assistant of the first electronic device.
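The reminder flow in the preceding bullets, a "first message" carrying event information and a time point, displayed by the receiving device when that time point arrives, can be sketched as follows. The field names are illustrative assumptions; the patent only specifies that the message contains event information and a time point.

```python
import datetime

def make_reminder_message(event_info, time_point, sender_id):
    """Build the 'first message' described above: event information
    plus the time point at which the receiving device should display
    it. Field names are illustrative, not from the patent."""
    return {
        "sender": sender_id,
        "event": event_info,
        "time_point": time_point,
    }

def should_display(message, now):
    """The receiving device displays the event information once the
    reminder's time point has been reached."""
    return now >= message["time_point"]
```

A first electronic device would build such a message from the recognized voice instruction and send it (e.g., via the voice assistant server) to the second device, which polls `should_display` against its clock.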
  • each functional unit in this embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the first acquisition unit and the second acquisition unit may be the same unit or different units.
  • the above-mentioned integrated units may be implemented in the form of hardware, or may be implemented in the form of software functional units.
  • the term “when” may be interpreted to mean “if” or “after” or “in response to determining" or “in response to detecting" depending on the context.
  • the phrases “on determining" or “if detecting (the stated condition or event)” can be interpreted to mean “if determining" or “in response to determining" or “on detecting (the stated condition or event)” or “in response to the detection of (the stated condition or event)”.
  • the above-mentioned embodiments it may be implemented in whole or in part by software, hardware, firmware or any combination thereof.
  • software it can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVDs), or semiconductor media (eg, solid state drives), and the like.


Abstract

This application provides a control method and apparatus for an electronic device, relating to the field of intelligent terminal technologies. In the method, an electronic device can receive a voice instruction input by a user in a voice assistant. The electronic device can determine the current user state of at least one user in the area to which the electronic device belongs, and can respond to the input voice instruction according to the current user state of the at least one user. Based on this solution, when receiving a voice instruction, the electronic device can determine the current user state of at least one user in its area and respond to the input voice instruction according to the acquired current user state. More users' needs can therefore be taken into account, so that the voice assistant can serve users more intelligently, improving the performance of the voice assistant. The method can be applied to the field of artificial intelligence and relates to voice assistants.

Description

Control method and apparatus for an electronic device
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202011198245.1, filed with the China National Intellectual Property Administration on October 31, 2020 and entitled "Control method and apparatus for an electronic device", which is incorporated herein by reference in its entirety.
Technical field
This application relates to the field of intelligent terminal technologies, and in particular, to a control method and apparatus for an electronic device.
Background
At present, an electronic device can interact intelligently with a user through intelligent dialogue and instant question answering, help the user solve problems, and provide the user with an intelligent and convenient voice assistant service. However, the voice assistant service of current electronic devices considers only the needs of a single user and cannot fully take into account the environment the user is in. Therefore, current voice assistant services are not intelligent enough and cannot meet the needs of multiple people.
Summary
This application relates to a control method and apparatus for an electronic device, to improve the performance of a voice assistant service.
According to a first aspect, an embodiment of this application provides a control method for an electronic device. The method may be performed by the electronic device provided in the embodiments of this application, or by a chip with functions similar to those of the electronic device. In the method, the electronic device can receive a voice instruction input by a user through a voice assistant on the electronic device. The electronic device can determine the current user state of at least one user in the area to which the electronic device belongs, and respond to the input voice instruction according to the current user state of the at least one user.
Based on the foregoing solution, when receiving a voice instruction, the electronic device can determine the current user state of at least one user in its area and respond to the input voice instruction according to the acquired current user state. More users' needs can therefore be taken into account, so that the voice assistant can serve users more intelligently, improving the performance of the voice assistant.
In a possible implementation, when determining the current user state of at least one user in the area to which it belongs, the electronic device may determine at least one target device in the area and send a first request message to the at least one target device. The first request message may be used to acquire the current user state. The at least one target device can acquire the current user state within the range it can monitor and send it to the electronic device. The electronic device can receive at least one current user state from the at least one target device.
Based on the foregoing solution, the electronic device can determine at least one target device in its area and acquire the current user state of at least one user by communicating with the at least one target device.
In a possible implementation, if a first user state exists in the at least one current user state, the electronic device may execute the operation corresponding to the voice instruction, where the first user state represents the noise environment required by the user. If the first user state does not exist in the at least one current user state, the electronic device may search for at least one peripheral device in the current network connection and execute the operation corresponding to the voice instruction through the at least one peripheral device.
Based on the foregoing solution, the electronic device can choose different ways of executing the input voice instruction according to the noise environment required by users, making the voice assistant more intelligent and taking more people's needs into account.
In a possible implementation, the at least one target device has a target user identifier and the electronic device has a user identifier, where the user identifier and the target user identifier are in the same voice assistant group.
Based on the foregoing solution, devices of different users can be added to the same voice assistant group through their user identifiers, so that the voice assistant group makes communication between users more convenient.
In a possible implementation, the electronic device may generate first information in response to the voice instruction. The voice instruction contains event information and a time point, so the first information may also contain the event information and the time point. The electronic device may send the first information to the at least one target device.
Based on the foregoing solution, the electronic device can send a reminder message set for other users to the at least one target device through the voice assistant group, making the voice assistant more intelligent.
According to a second aspect, this application provides a control method for an electronic device. The method may be performed by the electronic device provided in this application, or by a chip with functions similar to those of the electronic device. In the method, the electronic device can receive a first request message from a first electronic device. The first request message may be used by the first electronic device to acquire the current user state. The electronic device can acquire the current user state and send it to the first electronic device.
Based on the foregoing solution, the electronic device can acquire the current user state according to the request message of the first electronic device and send it to the first electronic device, so that the first electronic device can execute the voice instruction input by the user according to the current user state. The voice assistant service can thus take more people's needs into account, improving its performance.
In a possible implementation, the electronic device may acquire the current user state by using a sensor, and/or collect the user's setting information to acquire the current user state.
Based on the foregoing solution, the electronic device can acquire the current user state quickly and conveniently according to a sensor or information set by the user.
In a possible implementation, the at least one electronic device has a target user identifier and the first electronic device has a user identifier, where the user identifier and the target user identifier are in the same voice assistant group.
Based on the foregoing solution, devices of different users can be added to the same voice assistant group through their user identifiers, so that the voice assistant group makes communication between users more convenient.
In a possible implementation, the electronic device may receive first information. The first information may contain event information and a time point. The electronic device can display the event information according to the time point.
Based on the foregoing solution, the electronic device can receive a reminder message set for it by another user and display the reminder message at the reminder time point.
According to a third aspect, an embodiment of this application provides a control method for an electronic device. The method may be performed by the electronic device provided in the embodiments of this application, or by a chip with functions similar to those of the electronic device. In the method, the electronic device can receive a voice instruction input by a user through a voice assistant; in response to the voice instruction, the electronic device can send the voice instruction to a second electronic device. The electronic device has a first user identifier and the second electronic device has a second user identifier, where the first user identifier and the second user identifier are in the same voice assistant group.
Based on the foregoing solution, the electronic device can generate reminder messages for other users in the group through the voice assistant group, and different users can communicate through the voice assistant group, making the voice assistant service more intelligent.
In a possible implementation, the electronic device may generate a corresponding first message in response to the voice instruction. The first message may contain event information and a time point. The electronic device may send the first message to the second electronic device, so that the second electronic device can display the event information according to the time point.
Based on the foregoing solution, the electronic device can generate a corresponding reminder message according to the voice instruction input by the user and send it to other users in the voice assistant group, so that the other users can receive the reminder message.
In a possible implementation, the electronic device may send the voice instruction, through the voice assistant on the electronic device, to the voice assistant corresponding to the second user identifier.
Based on the foregoing solution, the electronic device can send the voice instruction through its voice assistant to the voice assistants of other users in the voice assistant group, and can thus set reminder messages for other users safely and quickly.
According to a fourth aspect, an embodiment of this application provides a control method for an electronic device. The method may be performed by the electronic device provided in the embodiments of this application, or by a chip with functions similar to those of the electronic device. In the method, the electronic device can receive a voice instruction from a first electronic device and generate a first message according to the voice instruction, where the first message may contain event information and a time point, and the electronic device can display the event information according to the time point; or
the electronic device can receive the first message from the first electronic device, where the first message may contain event information and a time point, and the electronic device can display the event information according to the time point. The first electronic device has a first user identifier and the electronic device has a second user identifier, where the first user identifier and the second user identifier may be in the same voice assistant group.
Based on the foregoing solution, different users can set reminder messages for other users in the group through the voice assistant group. After receiving a reminder message, a user's device can remind the user when the reminder time point arrives, making the voice assistant service more intelligent.
In a possible implementation, the electronic device may receive, through its voice assistant, the first message from the voice assistant of the first electronic device.
Based on the foregoing solution, the electronic device can receive, through its voice assistant, a reminder message set for its user by another user, and can receive the reminder message safely and quickly.
第五方面,本申请实施例提供一种芯片,该芯片与电子设备中的存储器耦合,用于调用存储器中存储的计算机程序并执行本申请实施例第一方面及其第一方面任一可能设计的技术方案或者执行上述第二方面及其第二方面中的任一种可能的实现方式中的技术方案或者执行上述第三方面及其第三方面中的任一种可能的实现方式中的技术方案或者执行上述第四方面及其第四方面中的任一种可能的实现方式中的技术方案。本申请实施例中“耦合”是指两个部件彼此直接或间接地结合。
第六方面,本申请实施例还提供了一种电路系统。该电路系统可以是一个或多个芯片,比如,片上系统(system-on-a-chip,SoC)。该电路系统包括:至少一个处理电路;所述至少一个处理电路,用于执行上述第一方面及其第一方面中的任一种可能的实现方式中的技术方案或者执行上述第二方面及其第二方面中的任一种可能的实现方式中的技术方案或者执行上述第三方面及其第三方面中的任一种可能的实现方式中的技术方案或者执行上述第四方面及其第四方面中的任一种可能的实现方式中的技术方案。
第七方面,本申请实施例还提供了一种电子设备,所述电子设备包括执行上述第一方面或者第一方面的任意一种可能的实现方式的模块/单元;或者所述电子设备包括执行上述第二方面或者第二方面的任意一种可能的实现方式的模块/单元;或者所述电子设备包括执行上述第三方面及其第三方面中的任一种可能的实现方式的模块/单元;或者所述电子设备包括执行上述第四方面及其第四方面中的任一种可能的实现方式的模块/单元。这些模块/单元可以通过硬件实现,也可以通过硬件执行相应的软件实现。
第八方面，本申请实施例还提供一种计算机可读存储介质，所述计算机可读存储介质包括计算机程序，当计算机程序在电子设备上运行时，使得所述电子设备执行本申请实施例第一方面及其第一方面任一种可能的实现方式的技术方案或者执行本申请实施例第二方面及其第二方面任一种可能的实现方式的技术方案或者执行本申请实施例第三方面及其第三方面中的任一种可能的实现方式中的技术方案或者执行本申请实施例第四方面及其第四方面中的任一种可能的实现方式中的技术方案。
第九方面，本申请实施例还提供一种程序产品，包括指令，当所述程序产品在电子设备上运行时，使得所述电子设备执行本申请实施例第一方面及其第一方面任一可能实现的方式中的技术方案或者执行本申请实施例第二方面及其第二方面任一可能实现的方式中的技术方案或者执行本申请实施例第三方面及其第三方面中的任一种可能的实现方式中的技术方案或者执行本申请实施例第四方面及其第四方面中的任一种可能的实现方式中的技术方案。
此外，第五方面至第九方面的有益效果可以参见第一方面至第四方面的有益效果，此处不再赘述。
附图说明
图1A为本申请实施例提供的电子设备的语音助手的示意图之一;
图1B为本申请实施例提供的电子设备的语音助手的示意图之一;
图2为本申请实施例提供的电子设备的硬件结构示意图;
图3为本申请实施例提供的电子设备的软件结构示意图;
图4A为本申请实施例提供的设置用户状态的显示界面示意图;
图4B为本申请实施例提供的用户共享位置信息的显示界面示意图;
图5为本申请实施例提供的电子设备的控制方法的示例性流程图之一；
图6为本申请实施例提供的语音助手群组的功能示意图之一;
图7为本申请实施例提供的语音助手群组的功能示意图之一;
图8为本申请实施例提供的语音助手群组的功能示意图之一;
图9A为本申请实施例提供的语音助手群组的功能示意图之一;
图9B为本申请实施例提供的语音助手群组的功能示意图之一;
图9C为本申请实施例提供的语音助手群组的功能示意图之一;
图10为本申请实施例提供的电子设备的控制方法的示例性流程图之一;
图11A为本申请实施例提供的电子设备的语音助手的功能示意图之一;
图11B为本申请实施例提供的电子设备的控制方法的场景示意图之一;
图12为本申请实施例提供的确定同一区域内的目标设备的方法示意图;
图13A为本申请实施例提供的电子设备的控制方法的场景示意图之一;
图13B为本申请实施例提供的电子设备的控制方法的场景示意图之一;
图14A为本申请实施例提供的电子设备的控制方法的场景示意图之一;
图14B为本申请实施例提供的电子设备的控制方法的场景示意图之一;
图14C为本申请实施例提供的电子设备的控制方法的场景示意图之一;
图14D为本申请实施例提供的电子设备的控制方法的场景示意图之一;
图15为本申请实施例提供的电子设备的控制方法的场景示意图之一;
图16为本申请实施例提供的电子设备的框图。
具体实施方式
下面将结合本申请以下实施例中的附图,对本申请实施例中的技术方案进行详尽描述。
目前,电子设备可以通过智能对话和即时问答与用户智能交互,可以帮助用户解决问题,为用户提供智能且便利的语音助手服务。参阅图1A,用户通过语音助手服务为自己制定日程。比如,用户可以说“上午7点有会议”,电子设备可以接收用户的语音数据,并进行文本识别。电子设备可以根据识别到的内容,创建一个日程,即“7点有会议”,从而可以在7点时提醒用户。
参阅图1B,用户想要听音乐时,可以说“播放音乐”。电子设备可以识别用户的语音,获取相关的指令,即播放音乐的指令。此时,电子设备可以开启能够播放音乐的应用程序,并播放音乐。
但是,当前电子设备的语音助手服务只能考虑到用户一个人的需求,而不能实现多个用户的互动。此外,当前电子设备的语音助手服务也无法考虑到当前用户所处的环境。例如,用户A在家里想要听音乐,而此时用户B在家里学习需要一个安静的环境。但是,电子设备在识别到用户A的语音“播放音乐”时,不会考虑到用户B的需求,仍然会开启能够播放音乐的应用程序,并播放音乐。在一些实施例中,电子设备如果连接有外放设备,还可以通过外放设备播放音乐。此时,用户A考虑到用户B需要一个较为安静的环境,可以手动操作调节音量,将音量降低以避免影响用户B。
基于上述技术问题,本申请实施例提供一种电子设备的控制方法,以避免上述存在的问题,使得语音助手服务可以满足多个用户的需求,可以实现多个用户之间的互动,也可以充分考虑电子设备所处的环境,更加智能的为用户提供服务。本申请实施例提供了一种电子设备的控制方法,该方法可以适用于任何电子设备,例如具有曲面屏、全面屏、折叠屏等的电子设备。电子设备诸如手机、平板电脑、可穿戴设备(例如,手表、手环、智能头盔等)、车载设备、智能家居、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等。
在本申请实施例中,电子设备可以在接收到用户输入的语音指令时,通过传感器确定当前所处的环境,从而可以选择合适的方式执行用户的语音指令。因此,可以使得语音助手服务考虑到多个用户的需求,可以更加智能的为用户提供服务。
以下实施例中所使用的术语只是为了描述特定实施例的目的,而并非旨在作为对本申请的限制。如在本申请的说明书和所附权利要求书中所使用的那样,单数表达形式“一个”、“一种”、“所述”、“上述”、“该”和“这一”旨在也包括例如“一个或多个”这种表达形式,除非其上下文中明确地有相反指示。还应当理解,在本申请实施例中,“一个或多个”是指一个、两个或两个以上;“和/或”,描述关联对象的关联关系,表示可以存在三种关系;例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B的情况,其中A、B可以是单数或者复数。字符“/”一般表示前后关联对象是一种“或”的关系。
在本说明书中描述的参考“一个实施例”或“一些实施例”等意味着在本申请的一个或多个实施例中包括结合该实施例描述的特定特征、结构或特点。由此，在本说明书中的不同之处出现的语句“在一个实施例中”、“在一些实施例中”、“在其他一些实施例中”、“在另外一些实施例中”等不是必然都参考相同的实施例，而是意味着“一个或多个但不是所有的实施例”，除非是以其他方式另外特别强调。术语“包括”、“包含”、“具有”及它们的变形都意味着“包括但不限于”，除非是以其他方式另外特别强调。
本申请实施例涉及的至少一个,包括一个或者多个;其中,多个是指大于或者等于两个。另外,需要理解的是,在本申请的描述中,“第一”、“第二”等词汇,仅用于区分描述的目的,而不能理解为指示或暗示相对重要性,也不能理解为指示或暗示顺序。
以下实施例中以手机为例进行介绍。其中,手机中可以安装各种应用程序(application,app),可以简称应用,为能够实现某项或多项特定功能的软件程序。通常,电子设备中可以安装多个应用,例如,即时通讯类应用、视频类应用、音频类应用、图像拍摄类应用等等。其中,即时通信类应用,例如可以包括短信应用、微信(WeChat)、WhatsApp Messenger、连我(Line)、照片分享(instagram)、Kakao Talk、钉钉等。图像拍摄类应用,例如可以包括相机应用(系统相机或第三方相机应用)。视频类应用,例如可以包括Youtube、Twitter、抖音、爱奇艺,腾讯视频等等。音频类应用,例如可以包括酷狗音乐、虾米、QQ音乐等等。以下实施例中提到的应用,可以是电子设备出厂时已安装的应用,也可以是用户在使用电子设备的过程中从网络下载或其他电子设备获取的应用。
参见图2所示,为本申请一实施例提供的电子设备的结构示意图。如图2所示,电子设备可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。
电子设备100的无线通信功能可以通过天线1，天线2，移动通信模块150，无线通信模块160，调制解调处理器以及基带处理器等实现。天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用，以提高天线的利用率。例如：可以将天线1复用为无线局域网的分集天线。在另外一些实施例中，天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
显示屏194用于显示应用的显示界面，例如相机应用的取景界面等。显示屏194包括显示面板。显示面板可以采用液晶显示屏（liquid crystal display，LCD），有机发光二极管（organic light-emitting diode，OLED），有源矩阵有机发光二极体或主动矩阵有机发光二极体（active-matrix organic light emitting diode，AMOLED），柔性发光二极管（flex light-emitting diode，FLED），Miniled，MicroLed，Micro-oLed，量子点发光二极管（quantum dot light emitting diodes，QLED）等。在一些实施例中，电子设备100可以包括1个或N个显示屏194，N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,以及至少一个应用程序(例如爱奇艺应用,微信应用等)的软件代码等。存储数据区可存储电子设备100使用过程中所产生的数据(例如拍摄的图像、录制的视频等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将图片,视频等文件保存在外部存储卡中。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
其中,传感器模块180可以包括压力传感器180A,触摸传感器180K,环境光传感器180L等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。
环境光传感器180L用于感知环境光亮度。电子设备可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合，检测电子设备是否在口袋里，以防误触。指纹传感器180H用于采集指纹。电子设备可以利用采集的指纹特性实现指纹解锁，访问应用锁，指纹拍照，指纹接听来电等。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备的表面,与显示屏194所处的位置不同。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现与电子设备100的接触和分离。
可以理解的是,图2所示的部件并不构成对手机的具体限定,手机还可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。以下的实施例中,以图2所示的电子设备为例进行介绍。
图3示出了本申请一实施例提供的电子设备的软件结构框图。如图3所示,电子设备的软件结构可以是分层架构,例如可以将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层(framework,FWK),安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。如图3所示,应用程序层可以包括相机、设置、皮肤模块、用户界面(user interface,UI)、三方应用程序等。其中,三方应用程序可以包括微信、QQ、图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息、语音助手功能等。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层可以包括一些预先定义的函数。如图3所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源，比如本地化字符串，图标，图片，布局文件，视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(media libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
此外，系统库还可以包括语音助手服务。其中，语音助手服务可以用于识别用户输入的语音数据，并识别语音数据包含的关键词，控制电子设备执行相关的操作。例如，电子设备可以接收通过如图2所示的受话器170B或麦克风170C传输的用户语音，并识别该用户语音。假设用户语音为“播放电影”，电子设备可以识别到关键词为“播放”和“电影”，则电子设备可以开启能够播放电影的应用程序，播放电影。或者，电子设备可以播放已经存储的电影。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
硬件层可以包括各类传感器,例如本申请实施例中涉及的加速度传感器、陀螺仪传感器、触摸传感器等。
下面结合本申请实施例的电子设备的控制方法,示例性说明电子设备的软件以及硬件的工作流程。
在本申请实施例中,语音助手服务的每一个用户可以有一个用户标识,该用户标识可以是唯一标识一个用户的标识。例如,可以是用户的电话号码或者可以是华为账号等。用户可以通过用户标识和预先设置的密码在电子设备上登录用户标识。这里的用户标识可以标识一个用户的身份。其中,每一个用户标识可以关联至少一个电子设备的标识。例如,一个用户可以在手机、平板电脑、笔记本电脑等多个电子设备上登录用户标识。因此该用户的用户标识关联的电子设备的标识可以包括手机的标识、平板电脑的标识、笔记本电脑的标识。用户可以设置用户标识所关联的电子设备的标识,或者用户的语音助手可以确定登录用户标识的电子设备,从而可以将登录用户标识的电子设备与用户标识进行关联。
用户可以拥有几台设备，一台公共设备（如家里的大屏）也可以被几个用户所有。用户标识可以是用户使用的语音助手标识。在一种可能的组网方式下，不同用户的语音助手组成一个群组，其中用户A下指令给用户B的时候，用户A通过其设备上的语音助手传递信息给用户B的语音助手，由用户B的语音助手来执行指令。其中，传递的信息包括设置通知和设置的提醒等。
在一种可能的实现方式中,群组的信息可以包括用户的设备信息,用户A下指令的时候,可以直接通过自己的语音助手查询组网中其他设备的标识,直接发送指令到用户B的设备,而不通过用户B的语音助手。
另外,如果两个用户没有组成群组,则用户A的语音助手可以通过通讯录/应用程序(如微信、QQ等及时通信应用程序)查找用户B并发送控制消息给用户B的设备,以使用户B的设备执行相应指令。
用户A的电子设备发送指令到用户B的电子设备或用户B的语音助手时,还可以先弹出提示给用户B,用户B同意后,用户B的设备或用户B的语音助手执行相关指令。
在一种可能的实现方式中，不同的用户可以通过用户标识实现通信。比如，用户A想要给用户B发送一个提醒消息，提醒用户B在8点赴约。用户A可以向电子设备A输入指令“提醒用户B8点赴约”。电子设备A可以在通讯录中查找用户B，比如可以在通讯录中查找被命名为“用户B”的电话号码。如果电子设备A在通讯录中查找到被命名为“用户B”的电话号码，则可以向该电话号码发送一个“请在8点赴约”的短信息。或者，电子设备A可以根据用户B的电话号码查找用户B的语音助手，并向用户B的语音助手发送输入指令“提醒用户B8点赴约”，或者根据该指令生成的提醒消息。用户B的语音助手可以将该指令或者提醒消息发送给用户B的用户标识所关联的电子设备。用户B的电子设备可以显示该提醒消息，或者也可以生成一个在8点进行提醒的日程。可选的，用户B的电子设备在生成一个在8点进行提醒的日程之前，可以征求用户B的同意，比如可以通过显示屏显示“用户A提醒您8点赴约，是否生成在8点进行提醒的日程”，在用户B同意后用户B的电子设备可以生成一个在8点进行提醒的日程。
用户可以通过输入语音数据的方式唤醒语音助手服务。在一个示例中,用户可以通过输入指定文本内容的语音数据唤醒语音助手服务。该指定文本内容可以是用户在注册用于唤醒语音助手服务的语音数据时,使用的语音数据。电子设备接收到用户输入的语音数据后,可以对该语音数据进行文本识别判断是否存在指定文本内容。如果该语音数据中存在指定文本内容,则电子设备进入语音助手服务。在另一个示例中,用户可以通过输入随机的语音数据或者指定文本内容的语音数据唤醒语音助手服务。电子设备可以根据用户输入的语音数据,获取用户的声纹特征。电子设备可以将获取到的声纹特征与存储的声纹特征进行比对,在比对结果表示匹配成功时,电子设备可以进入语音助手服务。
用户可以通过触摸显示屏的方式，或者触碰电子设备上的实体按键的方式、或者通过预设的隔空手势使显示屏亮起。其中，触摸显示屏的方式可以包括例如单击显示屏、双击显示屏或者在屏幕上画一个预设的图案，如字母等。这里的图案可以是预先设置的，或者也可以是电子设备规定的，本申请不做具体限定。预设的隔空手势可以包括例如手掌向右滑动、手掌向左滑动、手指向右滑动或者手指向左滑动等，隔空手势可以是用户预先设置的，或者也可以是电子设备规定的，本申请不做具体限定。在显示屏亮起后，用户可以输入预先设置的语音数据，如用户可以说“你好”。电子设备可以接收用户输入的内容为“你好”的语音数据，并识别到该语音数据中包含唤醒词，因此电子设备进入语音助手服务。电子设备进入语音助手服务后可以亮屏，并通过显示屏显示提示信息，以提示用户进入了语音助手服务。例如，电子设备可以在显示屏显示“我在呢”或者“有什么可以帮您”等内容，提示用户继续输入指令。可选的，电子设备也可以不亮屏，即保持屏幕是黑暗的状态，通过输出语音数据的方式提示用户进入了语音助手服务。电子设备可以通过输出内容为“我在呢”或者“有什么可以帮您”等的语音数据，以提示用户已经进入语音助手服务。
应理解,唤醒语音助手服务的指定文本内容可以是用户在电子设备上提前录制的,或者也可以是电子设备指定的。在一个示例中,如果用户想要通过输入语音数据的方式唤醒语音助手服务,则可以在电子设备上提前注册声纹。电子设备可以通过显示屏提示用户“请说,你好”,用户则可以根据提示说“你好”。电子设备可以根据用户输入的语音数据进行声纹识别,获取用户的声纹特征,并存储用户的声纹特征。可选的,为了提高声纹识别的准确率,电子设备还可以继续提示用户输入语音数据。电子设备可以在显示屏显示“请说,放点音乐”,用户则可以根据提示说“放点音乐”。在注册完成后,电子设备可以在显示屏显示注册完成的提示。用户可以根据电子设备的提示,多次输入语音数据,以便电子设备可以根据用户多次输入的语音数据,识别用户的声纹特征。
在用户通过输入语音数据的方式唤醒语音助手服务时,电子设备可以接收用户输入的语音数据,并对语音数据进行声纹识别,获取该语音数据的声纹特征。电子设备可以将获取到的声纹特征与存储的声纹特征进行对比,判断是否为同一人。如果是同一人,则可以唤醒语音助手服务。如果不是同一人,则无法唤醒语音助手服务。可选的,如果不是同一人,电子设备也可以通过显示屏提示用户未唤醒语音助手服务,或者可以提示用户重新输入语音数据。
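上述声纹比对的判断流程可以用如下Python代码示意。其中以向量余弦相似度代替真实的声纹识别算法，函数名 try_wakeup 与阈值 0.8 均为说明而假设的，并非本申请方案的限定实现：

```python
import math

def try_wakeup(voiceprint, stored_voiceprints, threshold=0.8):
    """将输入语音的声纹特征与已存储的声纹特征逐一比对，
    若与任意一条已注册声纹的相似度达到阈值，即认为是同一人并唤醒语音助手（示意实现）。"""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    return any(cosine(voiceprint, s) >= threshold for s in stored_voiceprints)
```

比对失败时返回 False，对应上文中"无法唤醒语音助手服务，提示用户重新输入语音数据"的分支。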
在一种可能的实现方式中,多个用户可以通过各自的用户标识组成一个群组。用户1可以先建立一个群组,并且可以邀请想要邀请的用户加入已经建立的群组。多个用户可以采用面对面建立群组的方式建立一个群组。多个用户可以通过面对面建群的功能,在电子设备上输入相同的数字或文字等。电子设备可以将用户标识和用户输入的数字或文字发送给语音助手服务的服务器,语音助手服务的服务器可以查找在同一时间、同一地点且输入相同的数字或文字的用户标识,并为这些用户标识建立一个群组。语音助手服务的服务器可以通知每一个用户标识所对应电子设备,电子设备可以显示已经建立的群组。其中,用户可以在已经建立的群组中添加新的成员。比如,组内的成员可以邀请新的成员加入该群组。不仅如此,创建群组的群主也可以在该群组中移除任意一个群成员。
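上述"面对面建群"中，服务器按"同一时间、同一地点且输入相同的数字或文字"聚合用户标识的过程，可以用如下Python代码示意。其中时间窗口 time_window、位置网格 cell_size 以及函数名均为说明而假设，并非本申请方案的限定实现：

```python
from collections import defaultdict

def build_groups(requests, time_window=60, cell_size=100):
    """requests 为 (用户标识, 时间戳, (x, y)坐标, 输入的数字或文字) 的列表。
    服务器将时间相近、位置相近且输入内容相同的请求聚为同一个群组（示意实现）。"""
    buckets = defaultdict(list)
    for user_id, ts, (x, y), code in requests:
        # 以输入内容、时间窗口编号和粗粒度位置网格作为聚合键
        key = (code, ts // time_window, int(x // cell_size), int(y // cell_size))
        buckets[key].append(user_id)
    # 只有聚合到多于一个用户标识的结果才构成群组
    return [members for members in buckets.values() if len(members) > 1]
```

建群完成后，服务器再通知各用户标识对应的电子设备显示已建立的群组。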
通过上述方式,用户通过用户标识创建群组后,可以在群组中分享一些信息,比如位置信息、用户状态等信息。例如,在一个群组中,群成员可以在群组中分享当前自己正在做什么,或者也可以分享自己在某一个时间会做什么等。参阅图4A,用户A可以在群组中将用户状态调整为正在工作,群组中的其他成员,如用户B和用户C则可以了解到用户A正在工作。可选的,用户A可以在群组中将用户状态设置为请勿打扰,群组中的其他成员,如用户B和用户C则可以了解到用户A现在不希望被打扰。
在一种可能的实现方式中,电子设备的语音助手服务可以收集用户设置的信息。比如,可以收集用户设置的日程信息、或者可以收集用户设置的闹钟信息等,调整用户状态。比如,用户设置了下午5点开始写作业的日程信息,电子设备在获取到该日程信息后,在到达下午5点时可以将用户状态调整为正在写作业。可选的,电子设备的语音助手服务也可以通过电子设备的传感器收集用户的状态信息。例如,可以通过电子设备的摄像头、音频模块、触摸传感器和压力传感器等采集用户的状态信息。
举例来说,电子设备可以通过摄像头采集用户当前正在做什么,比如用户在工作、写作业或者睡眠等。电子设备也可以通过音频模块采集用户的语音数据,并对语音数据进行文本识别,用来判断用户状态。电子设备也可以通过触摸传感器和压力传感器采集用户是否正在使用电子设备。
在群组中,群成员也可以分享自身所在的位置信息。参阅图4B,用户A可以在群组中分享自己的位置信息,该群组中的用户B和用户C可以通过用户A分享的位置信息,确定用户A当前所在位置,以及用户A当前所在位置与自身位置的距离。不仅如此,如果用户B想要了解如何到达用户A的所在位置,也可以通过快捷键或者语音指令,进入导航功能。比如,用户B想要了解如何到达用户A所在位置,用户B则可以说出“去找用户A”,用户B的电子设备则可以接收该语音数据,并进行文本识别。用户B的电子设备可以根据识别到的语音指令,进入导航功能查找从用户B所在位置到达用户A所在位置的方式。
在群组中，群成员之间也可以相互分享照片、视频或文件等信息。每一个群组可以具有一个共享文件夹，群成员可以将想要共享的照片、视频或文件等存储在共享文件夹中，群组内的任意一个群成员可以在共享文件夹中查看共享的照片、视频或文件等。不仅如此，也可以提醒某一个或某些群成员查看共享文件夹。以下，结合附图介绍本申请实施例中群组中的一个群成员为某一个或者某一些群成员设置提醒消息的方法。
参阅图5,为本申请实施例中电子设备的控制方法的示例性流程图,可以包括以下步骤:
501:第一电子设备接收用户在语音助手中输入的指令。
其中,第一电子设备可以接收用户在语音助手中输入的语音指令,或者以手动输入的方式输入的指令。第一电子设备可以通过音频模块接收用户输入的语音指令。
502:第一电子设备从用户输入的指令中识别待提醒的用户。
其中,第一电子设备可以对用户输入的指令进行文本识别,从该指令中识别到用户。比如,用户输入的指令为“提醒A查看群消息”,第一电子设备可以对该指令进行文本识别,可以识别到需要提醒的是A,因此第一电子设备可以确定待提醒的用户是A。
503:第一电子设备从语音助手群组中查找与待提醒的用户相关的用户标识。
应理解,这里的待提醒的用户可以是第一电子设备的用户为该用户设置的备注名称,或者也可以是用户自己设置的昵称。比如,用户输入的指令为“提醒妈妈看电视”,第一电子设备可以识别到待提醒的用户是“妈妈”。第一电子设备可以在语音助手群组中查找备注名称和昵称,确定“妈妈”的用户标识。
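步骤502和步骤503中"从指令中识别待提醒的用户，并在语音助手群组中按备注名称或昵称查找用户标识"的过程，可以用如下Python代码示意。其中的函数名与匹配规则为说明而假设，真实实现通常依赖文本识别模型，并非简单的字符串匹配：

```python
def parse_reminder(command, group_contacts):
    """解析"提醒某人做某事"形式的指令（示意实现）。
    group_contacts 为语音助手群组中 {备注名称或昵称: 用户标识} 的映射，
    返回 (待提醒用户的用户标识, 提醒内容)；未匹配到用户时返回 None。"""
    prefix = "提醒"
    if not command.startswith(prefix):
        return None
    rest = command[len(prefix):]
    # 备注名称按长度从长到短匹配，避免较短的名称抢先误匹配
    for name in sorted(group_contacts, key=len, reverse=True):
        if rest.startswith(name):
            return group_contacts[name], rest[len(name):]
    return None
```

例如对指令"提醒妈妈吃药"，可解析出待提醒用户"妈妈"的用户标识以及提醒内容"吃药"，再进入步骤504的消息发送。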
504:第一电子设备将第一消息发送给待提醒的用户的第二电子设备。
这里的第一消息可以是第一电子设备接收到的指令,或者也可以是第一电子设备根据用户输入的指令生成的提醒消息。比如,第一电子设备接收到了“提醒妈妈看电视”的指令,第一电子设备可以将该指令“提醒妈妈看电视”发送给第二电子设备。或者,第一电子设备也可以根据该指令生成一个提醒消息,例如“看电视啦”等。第一电子设备可以将该提醒消息发送给第二电子设备。
在一个示例中,第一电子设备可以通过语音助手将第一消息发送给待提醒的用户的语音助手。比如,第一电子设备确定待提醒的用户是“用户A”,因此可以将第一电子设备的语音助手可以将第一消息发送给用户A的语音助手。
在另一示例中，第一电子设备可以通过语音助手将第一消息发送给待提醒的用户的用户标识所关联的电子设备中的部分或全部。比如，第一电子设备可以确定待提醒的用户的用户标识所关联的电子设备的使用情况，第一电子设备可以通过语音助手将指令或者提醒消息发送给待提醒的用户的用户标识所关联电子设备中正在使用的电子设备。其中，第一电子设备可以通过语音助手向待提醒的用户的用户标识所关联的电子设备发送获取用户是否正在使用电子设备的请求消息。待提醒的用户的用户标识所关联的电子设备可以根据传感器、摄像头和/或音频模块确定用户是否正在使用电子设备，并将获取到的结果发送给第一电子设备。
需要说明的是,电子设备可以根据压力传感器或者触摸传感器,确定用户是否正在使用目标设备。或者电子设备也可以通过摄像头确定用户是否正在使用电子设备。比如,电子设备可以开启摄像头,并通过摄像头识别到有人脸时,可以确定用户正在使用电子设备。或者电子设备可以通过音频模块确定用户是否正在使用电子设备。比如,电子设备可以开启音频模块,确定是否有用户说话,如果有用户说话则可以认为用户正在使用电子设备。
505:第二电子设备显示根据用户输入的指令生成的提醒消息。
其中,第二电子设备的语音助手接收到用户输入的指令后,可以将该指令发送给第二电子设备。第二电子设备可以根据指令生成对应的提醒消息。或者,第二电子设备的语音助手接收到用户输入的指令后,可以根据该指令生成对应的提醒消息,并将提醒消息发送给第二电子设备。
举例来说,用户A将一张图片上传至共享文件夹中,并想要提醒用户B查看该图片。因此,用户A可以通过手动输入的方式或者输入语音指令的方式,向用户A的语音助手输入指令。用户A的语音助手可以解析该指令,并生成对应的提醒消息。用户A的语音助手可以从语音助手群组中查找到用户B,用户A的语音助手可以将提醒消息发送给用户B的语音助手。由用户B的语音助手发送给用户B的电子设备,用户B的电子设备可以通过显示屏显示该提醒消息,如图6所示。其中,用户B的语音助手可以将该提醒消息发送给用户B的用户标识关联的电子设备中的全部电子设备,或者部分电子设备。例如,用户B的语音助手可以获取用户B的用户标识所关联的电子设备的当前使用情况,用户B的语音助手可以将该提醒消息发送给正在使用的电子设备。可选的,用户A的语音助手可以将用户A输入的指令发送给用户B的语音助手。并由用户B的语音助手对该指令进行解析,生成对应的提醒消息。
另一种可能的实现方式中，用户A的语音助手也可以在语音助手群组中查找到用户B的用户标识关联的电子设备，并向查找到的电子设备发送输入的指令或者生成的提醒消息。或者，用户A的语音助手可以确定用户B的用户标识所关联的电子设备的使用情况，用户A的语音助手可以将指令或者提醒消息发送给用户B的用户标识所关联电子设备中正在使用的电子设备。其中，电子设备A可以通过语音助手向用户B的用户标识所关联的电子设备发送获取用户是否正在使用电子设备的请求消息。用户B的用户标识所关联的电子设备可以根据传感器、摄像头和/或音频模块确定用户是否正在使用电子设备，并将获取到的结果发送给电子设备A。
在群组中,某一群成员也可以为其他群成员中的部分或全部设置提醒消息。比如,群成员用户A可以为用户B设置提醒消息。
在一个示例中，用户A与用户B不在同一个区域，用户A可以提醒用户B“需要吃药”。用户A可以通过手动输入的方式或者输入语音指令的方式，向用户A的语音助手输入提醒用户B吃药的相关指令。用户A的语音助手可以在语音助手群组中查找到用户B的语音助手，用户A的语音助手可以将用户A输入的指令或者根据输入的指令生成的提醒消息，发送给用户B的语音助手。其中，用户A的语音助手可以通过移动通信网络或者即时通讯消息将指令或者提醒消息发送给用户B的语音助手。用户B的语音助手可以根据指令生成相应的提醒消息，并发送给用户B的电子设备。或者，用户A的语音助手可以将指令或者提醒消息通过移动数据网络或者即时通讯消息发送给用户B的用户标识所关联的电子设备。用户B的电子设备也可以通过响铃、震动或者语音的方式展示提醒消息，和/或用户B的电子设备也可以在显示屏显示提醒消息。
在另一示例中,用户A与用户B在同一个区域,用户A可以提醒用户B“上午8点有会议”。用户A可以通过手动输入的方式或者输入语音指令的方式,向用户A的语音助手输入提醒用户B上午8点有会议的相关指令。用户A的语音助手可以在语音助手群组中查找用户B的语音助手,用户A的语音助手可以将指令或者根据指令生成的提醒消息,发送给用户B的语音助手。其中,用户A的语音助手可以通过无线局域网、蓝牙、移动通信网络或者即时通讯消息将指令或者提醒消息发送给用户B的语音助手。用户B的语音助手可以将提醒消息发送给用户B的用户标识关联的全部或部分电子设备。或者,用户A的语音助手可以将指令或者提醒消息发送给用户B的用户标识所关联的电子设备中的全部或部分电子设备。其中,用户A的语音助手可以通过无线局域网、蓝牙、移动通信网络或者即时通讯消息将指令或者提醒消息发送给用户B的电子设备。用户B的电子设备也可以通过响铃、震动或者语音的方式展示提醒消息,和/或用户B的电子设备可以在显示屏显示提醒消息。
在一个示例中,群组中的每一个成员可以在电子设备上为其他成员设置对应的提醒方式。比如,群成员1可以为群成员2设置唯一一个铃声,群成员1可以为群成员3设置唯一一个铃声。在群成员1的电子设备接收到来自群成员2的提醒消息时,可以根据预先设置好的群成员2的铃声展示来自群成员2的提醒消息。
举例来说,参阅图7,在一个家庭群组中,女儿想要提醒妈妈吃药。因此,女儿可以通过输入语音指令的方式,向电子设备A说出“提醒妈妈吃药”。电子设备A可以通过麦克风或受话器接收该语音指令,并进行文本识别。电子设备A可以从语音指令中识别到需要提醒的用户,即“妈妈”。电子设备A可以在语音助手群组中查找与“妈妈”相关的用户。例如,电子设备A可以在语音助手群组中查找被备注为“妈妈”的用户。电子设备A可以通过语音助手将指令或者根据指令生成的提醒消息发送给查找到的用户,即“妈妈”的语音助手。“妈妈”的语音助手可以将指令或者提醒消息发送给“妈妈”的用户标识所关联的电子设备B。或者,电子设备A可以通过语音助手将指令或者生成的提醒消息发送给“妈妈”的用户标识所关联的电子设备B。电子设备B可以通过震动,并语音提示“女儿提醒您吃药啦~”的方式展示提醒消息。
需要说明的是，用户A，如上述的“女儿”给用户B，如上述的“妈妈”设置提醒的时候，电子设备A可以实时将第一信息或者指令发送给电子设备B，电子设备B可以根据指令或者第一信息，在电子设备B上设置一个提醒消息。这里的第一信息可以包括事件信息和时间点。或者，电子设备A可以将指令或者第一信息保存在电子设备A中，在到达该时间点时，将指令或者第一信息发送给电子设备B，由电子设备B根据时间点和事件信息进行提醒。其中指令可以包括设置提醒、调用应用程序和对周边设备进行控制等。
在一个示例中,用户A可以跟电子设备1的语音助手说,“在5月1日0时的时候,给用户B放一首生日歌”。则用户A的语音助手可以直接发送消息给用户B的语音助手,或者发送给用户B的用户标识关联的电子设备2。电子设备2可以设置一个提醒消息,在5月1日0时的时候打开可以播放音乐的应用程序播放生日歌。
在另一示例中，用户A的语音助手可以将指令“在5月1日0时的时候，给用户B放一首生日歌”存储在电子设备1中。在到达5月1日0时的时候，将指令发送给用户B的语音助手，或者发送给用户B的用户标识关联的电子设备2。电子设备2可以根据指令打开可以播放音乐的应用程序播放生日歌。可选的，电子设备1可以提前一段时间，将指令发送到用户B的语音助手，或者发送给用户B的用户标识关联的电子设备2。比如，电子设备1可以在4月30日的23点58分将指令发送给用户B的语音助手，或者发送给用户B的用户标识关联的电子设备2。由电子设备2在到达5月1日0时的时候，打开可以播放音乐的应用程序播放生日歌。
另外,如果用户A的语音助手在前述时间点判断出用户A和用户B在同一区域内,则选择一个空间内合适的设备来播放生日歌。比如,可以选择公放设备播放生日歌。
在一种可能的实现方式中,用户A可以跟电子设备1的语音助手说,“晚上睡觉的时候把B的空调调到22度,或者把用户B的空调调高一些”。用户A的语音助手可以直接发送消息给用户B的语音助手或者发送给用户B的用户标识关联的电子设备2。电子设备2在达到预设的时间点或者检测到用户B进入休息状态时,将控制空调,将空调的温度调到22度,或者可以控制空调的温度在指定的较高温度的范围内。
另一个示例中,用户A的语音助手可以将指令“晚上睡觉的时候把B的空调调到22度,或者把用户B的空调调高一些”存储在电子设备1中。电子设备1在到达预设的时间点可以将指令发送到用户B的语音助手或者发送给用户B的用户标识关联的电子设备2。电子设备2可以控制空调,将空调的温度调到22度,或者可以控制空调的温度在指定的较高温度的范围内。可选的,电子设备1可以提前预设的时间点一段时间,将指令发送到用户B的语音助手或者发送给用户B的用户标识关联的电子设备2。
另外，如果用户A的语音助手在前述预设的时间点判断出用户A和用户B在同一区域内，则可以选择一个区域内合适的设备来调节空调温度。如，可以选择用户A的设备、用户B的设备或者其他设备。
在一种可能的实现方式中，群组中的某一位成员还可以通过预约的方式为其他群成员设置提醒消息。举例来说，参阅图8，在一个家庭群组中，女儿想要提醒爸爸和妈妈下午7点有家庭聚餐。因此，女儿可以通过手动输入的方式或者输入语音指令的方式，向电子设备A输入指令。比如，女儿可以说出“提醒爸爸和妈妈下午7点有家庭聚餐”。电子设备A可以接收该语音指令，并进行文本识别。电子设备A可以识别语音指令，从语音指令中识别出需要提醒的用户，即“妈妈”和“爸爸”。电子设备A可以在语音助手群组中分别查找与“妈妈”和“爸爸”相关的用户。电子设备A可以将语音指令或者根据语音指令生成的提醒消息，发送给查找到的用户的语音助手。即，电子设备A可以将语音指令或者提醒消息分别发送给“妈妈”和“爸爸”的语音助手。“妈妈”的语音助手可以将提醒消息或者指令发送给“妈妈”的用户标识所关联的电子设备中的部分或全部的电子设备B。“爸爸”的语音助手可以将提醒消息或者语音指令发送给“爸爸”用户标识所关联的电子设备中的全部或部分电子设备C。电子设备B和电子设备C可以通过响铃的方式展示提醒消息。可选的，电子设备B和电子设备C在接收到指令时，可以分别创建一个日程，以便于在下午7点对该日程进行提醒。
在一种可能的实现方式中，在群组中某一群成员也可以为其他群成员中的部分或全部定制日程。比如，在一个家庭群组中，用户A可以通过手动输入的方式或者输入语音指令的方式，为用户B定制周六的日程。参阅图9A，用户A可以说“为用户B制定周六日程”，电子设备A可以接收该语音数据，并可以通过显示屏提示用户可以开始制定日程。如图9A所示，可以在显示设备显示“请开始制定日程”。用户A可以根据电子设备A的提示，为用户B制定日程。比如，用户A可以说“上午8点起床”，电子设备A可以识别该语音数据，并记录相关日程。用户A可以继续说“上午10点去参加音乐课程”，同样的电子设备A可以继续识别该语音数据，并记录相关日程。重复上述方式，用户A可以将为用户B制定的日程记录在电子设备A中。电子设备A可以在语音助手群组中查找用户B的语音助手。电子设备A可以通过语音助手将该制定完毕后的日程发送给用户B的语音助手。用户B的语音助手可以将用户A所制定的日程发送给电子设备B。电子设备B可以将接收到的日程通过显示屏进行显示，并根据该日程上的内容在电子设备B上创建日程，以便可以提醒用户B。可选的，电子设备A可以通过语音助手将制定完毕后的日程发送给用户B的用户标识所关联的电子设备中的部分或全部电子设备B。其中，电子设备A可以通过语音助手向用户B的用户标识所关联的电子设备发送获取用户是否正在使用电子设备的请求消息。用户B的用户标识所关联的电子设备可以根据传感器、摄像头和/或音频模块确定用户是否正在使用电子设备，并将获取到的结果发送给电子设备A。电子设备A可以通过语音助手将制定完毕后的日程发送给用户正在使用的电子设备。
可选的,电子设备B在接收到用户A为用户B制定的日程后,可以通过显示屏提示用户B有用户为其制定了日程。参阅图9B,电子设备B可以通过显示屏显示“用户A为您制定了日程,请您查看”,用户B可以选择查看或者不查看。可选的,用户B还可以选择是否接受用户A为其制定的日程。参阅图9C,用户B可以通过手动输入的方式或者输入语音指令的方式,选择接受或者拒绝用户A制定的日程。如果用户B接受用户A为其制定的日程,则电子设备B可以根据该日程上的内容创建日程,如果用户B拒绝用户A为其制定的日程,电子设备B则无需创建日程。另外,电子设备B也可以向电子设备A反馈用户B是否接受制定的日程,电子设备A可以通过显示屏显示用户B的选择。
基于上述方案,语音助手服务可以使得更多的用户参与进来,实现了多个用户之间的互动,可以更加便捷的为用户提供语音助手服务。
本申请实施例提供的电子设备的控制方法中,在用户A需要使用到语音助手服务时,还可以考虑到电子设备所处的环境,可以选择合适的方式为用户A提供相关服务。以下,结合附图进行介绍。
参阅图10,为本申请实施例提供的电子设备的控制方法的示例性流程图,包括以下步骤。
1001:电子设备接收用户输入的控制指令。
其中,电子设备具有用户标识,该用户标识可以用于标识用户的身份信息,该用户标识可以用于登录语音助手服务。电子设备可以先唤醒语音助手服务。用户可以输入预先设置的指定文本内容的语音数据,唤醒语音助手服务。在唤醒语音助手服务后,语音助手服务可以通过显示屏指示用户输入控制指令。用户可以通过手动输入的方式,向电子设备输入“播放音乐”的控制指令。或者,用户可以说“播放音乐”,向电子设备输入语音的控制指令。电子设备可以通过如图2所示的受话器170B或者麦克风170C接收用户输入的语音数据。电子设备可以对语音数据进行文本识别,得到控制指令。
可选的，如果电子设备没有接收到用户输入的语音数据，则可以通过显示设备提示用户重新输入语音数据。比如，由于外部环境嘈杂而用户的声音较小，因此电子设备没有接收到用户输入的语音数据。参阅图11A，电子设备可以通过显示设备显示，“您说什么，我没有听见”等提示消息，提示用户重新输入语音数据。
1002:获取电子设备在一个区域内的至少一个目标设备。可以理解成获取电子设备临近的其他设备,其他设备可以与该电子设备同处一个区域。
其中,一个区域可以是空间的概念,可以是指一个室内。例如,可以是一个办公室内,或者可以是一个住宅内等。可选的,一个区域也可以是指短距离通信可以通信的一个范围。例如,可以是支持蓝牙通信的范围,支持紫蜂(zigbee)通信的范围等。一个区域也可以是指电子设备能够连接到同一网关设备的区域。比如,可以是连接到同一无线局域网(wireless local area network,WLAN)的区域,或者可以是连接到同一无线接入点(access point,AP)的区域。
举例来说,电子设备可以搜索蓝牙设备,参阅图12,电子设备搜索到的蓝牙设备可以作为前述至少一个目标设备。又例如,电子设备可以接收来自连接的网关设备发送的设备信息。这些设备信息可以是与该网关通信连接的设备的信息,因此这些设备信息表示的设备作为前述至少一个目标设备。
可选的,目标设备可以是与电子设备在一个区域内的,且与电子设备在同一个语音助手服务群组的设备。电子设备也可以通过语音助手服务群组中,群成员分享的位置信息确定至少一个目标设备。比如,电子设备可以在语音助手服务群组中根据群成员分享的位置信息,确定群成员的位置与自身位置的距离。在计算得到的多个距离中,确定小于或等于指定值的距离,并将该距离对应的设备作为至少一个目标设备。
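上述"根据群成员分享的位置信息，将距离小于或等于指定值的设备确定为目标设备"的筛选过程，可以用如下Python代码示意。其中 max_distance 对应前述"指定值"，取值50.0仅为说明而假设：

```python
import math

def nearby_group_devices(my_position, shared_positions, max_distance=50.0):
    """根据群成员分享的位置信息筛选同一区域内的目标设备（示意实现）。
    shared_positions 为 {设备标识: (x, y)} 的映射。"""
    targets = []
    for device_id, position in shared_positions.items():
        # 计算群成员位置与自身位置的距离，保留不超过指定值的设备
        if math.dist(my_position, position) <= max_distance:
            targets.append(device_id)
    return targets
```

筛选得到的设备列表即可作为步骤1003中请求消息的接收方。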
1003:电子设备向至少一个目标设备发送获取当前用户状态的请求消息。
其中,该请求消息可以携带电子设备的用户标识。电子设备可以通过移动通信网络、无线局域网或者蓝牙设备向至少一个目标设备发送获取当前用户状态的请求消息。电子设备也可以通过第三方设备向至少一个目标设备转发获取当前用户状态的请求消息。这里的第三方设备可以是电子设备与至少一个目标设备连接的同一个网关设备,或者也可以是电子设备已经登录的语音助手服务的服务器。
在电子设备通过已经登录的语音助手服务的服务器向至少一个目标设备转发获取当前用户状态的请求消息时,电子设备与至少一个目标设备可以在同一个语音助手服务的群组。电子设备可以将获取当前用户状态的请求消息发送给语音助手服务的服务器,该服务器可以向至少一个目标设备登录的语音助手服务发送获取当前用户状态的请求消息,并由目标设备登录的语音助手服务将该请求消息发送给目标设备。可选的,电子设备已经登录的语音助手服务的服务器可以将该请求消息发送给至少一个目标设备。比如,可以通过移动通信网络或无线局域网发送给至少一个目标设备。
其中,当前用户状态可以是指在目标设备可以监控的范围内是否存在用户,如果存在用户,该用户当前正在做什么。比如,在目标设备监控到的范围内存在一个用户,该用户在睡眠中、或者在学习中等等。或者,当前用户状态也可以是用户所需的外部环境。比如,用户需要外部环境是安静的、或者用户对外部环境的噪声大小没有要求等等。如果在目标设备可以监控的范围内不存在用户,则当前用户状态可以是不存在用户。
一种可能的实现方式中,目标设备可以根据压力传感器或者触摸传感器,确定用户是否正在使用目标设备。比如,如果用户正在使用目标设备则可以认为存在用户,用户对外部环境的噪声大小没有要求。
另一种可能的实现方式中,目标设备可以通过摄像头确定当前用户状态。比如,目标设备可以开启摄像头,确定是否存在用户;如果有用户,用户是否在工作、学习或者睡眠等。
再一种可能的实现方式中,目标设备可以通过音频模块确定当前用户状态。比如,目标设备可以开启音频模块,确定是否存在用户。目标设备可以开启麦克风确定是否有用户说话,如果有用户说话则可以认为存在用户,如果没有用户说话则可以认为不存在用户。
应理解,目标设备可以通过压力传感器或触摸传感器、摄像头和音频模块确定当前用户状态。比如,目标设备可以开启摄像头和音频模块,目标设备通过摄像头确定存在用户,但通过音频模块确定不存在用户输入语音数据,因此目标设备可以认为用户当前正在工作、学习或者睡眠,用户需要外部环境是安静的。
目标设备可以开启摄像头和麦克风。目标设备通过摄像头确定有人脸存在,因此可以认为存在用户。目标设备又通过麦克风确定用户未输入语音数据,因此目标设备可以认为当前用户正在工作、学习或者睡眠。所以,目标设备可以确定当前用户状态为有用户存在,且用户正在学习、工作或者睡眠,即用户需要外部环境是安静的。
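目标设备综合摄像头、音频模块与触摸/压力传感器的结果推断当前用户状态的判断逻辑，可以用如下Python代码示意。其中的状态名称（no_user、quiet_needed、active）为说明而假设，并非本申请方案的限定实现：

```python
def infer_user_state(face_detected, speech_detected, device_in_use):
    """综合摄像头、音频模块与触摸/压力传感器的采集结果，
    推断目标设备监控范围内的当前用户状态（示意实现）。"""
    if not face_detected and not speech_detected and not device_in_use:
        return "no_user"        # 监控范围内不存在用户
    if face_detected and not speech_detected and not device_in_use:
        return "quiet_needed"   # 用户可能在工作、学习或睡眠，需要安静的外部环境
    return "active"             # 存在用户，且对外部环境的噪声大小没有要求
```

目标设备可以将推断出的状态作为步骤1004中发送给电子设备的当前用户状态。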
其中,目标设备在获取当前用户状态前可以获得用户的授权。比如,目标设备在每一次获取当前用户状态前可以获得用户的授权,在得到用户的授权后可以通过上述方法获取当前用户状态。又比如,目标设备在第一次获取当前用户状态前可以获得用户的授权,在获得用户的授权后,目标设备之后的每一次获取当前用户状态时可以默认已经获得用户的授权。
在本申请实施例中,目标设备可以是诸如手机、平板电脑、可穿戴设备(例如,手表、手环、智能头盔等)、车载设备、智能家居、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)等。
参阅图11B,电子设备可以确定一个区域内的至少一个目标设备。其中,电子设备确定一个区域内的目标设备包含摄像头A和手机B。电子设备可以向摄像头A和手机B发送获取当前用户状态的请求消息。因此,摄像头A在可以扫描到的范围内确定是否存在用户,以及用户正在做什么。手机B可以通过传感器、摄像头和音频模块中的至少一个获取当前用户状态。摄像头A和手机B可以分别将获取到的当前用户状态发送给电子设备A。
1004:至少一个目标设备将获取到的至少一个当前用户状态发送给电子设备。
其中,目标设备可以通过移动通信网络、无线局域网或者蓝牙设备向电子设备发送获取到的至少一个当前用户状态。目标设备也可以通过第三方设备向电子设备转发获取到的至少一个当前用户状态。这里的第三方设备可以是目标设备与电子设备连接的同一个网关设备,或者也可以是目标设备已经登录的语音助手服务的服务器。
在至少一个目标设备通过已经登录的语音助手服务的服务器向电子设备转发获取到的至少一个当前用户状态时,至少一个目标设备与电子设备可以在同一个语音助手服务的群组。至少一个目标设备可以将获取到的至少一个当前用户状态发送给语音助手服务的服务器,该服务器可以向电子设备登录的语音助手服务发送获取到的至少一个当前用户状态,并由电子设备登录的语音助手服务将至少一个当前用户状态发送给电子设备。可选的,至少一个目标设备已经登录的语音助手服务的服务器可以将至少一个当前用户状态发送给电子设备。比如,可以通过移动通信网络或无线局域网发送给电子设备。
1005:电子设备根据至少一个当前用户状态,执行控制指令。
其中,如果至少一个当前用户状态中存在用户正在学习、工作或睡眠,即存在用户需要外部环境是安静的当前用户状态时,电子设备在执行控制指令时可以将音量控制在指定音量范围内。如果至少一个当前用户状态中不存在用户正在学习、工作或睡眠,即不存在用户需要外部环境是安静的当前用户状态时,电子设备在执行控制指令时可以确定当前网络连接中的至少一个周边设备,并可以通过至少一个周边设备执行控制指令。这里的指定音量范围可以是预先设置的,例如可以是音量5-音量10等,本申请不做具体限定。
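步骤1005中"存在需要安静环境的用户则限制音量、否则交由周边设备执行"的决策逻辑，可以用如下Python代码示意。其中 quiet_volume 对应前述假设的指定音量范围的上限（如音量10），状态名沿用上一段代码的假设：

```python
def respond_to_play_command(user_states, peripheral_devices, quiet_volume=10):
    """根据各目标设备上报的当前用户状态，决定播放指令的执行方式（示意实现）。"""
    if "quiet_needed" in user_states:
        # 存在需要安静环境的用户：本机执行并将音量控制在指定音量范围内
        return ("play_local", quiet_volume)
    if peripheral_devices:
        # 不存在该状态且当前网络连接中有周边设备：通过周边设备（如蓝牙音箱）执行
        return ("play_on_peripheral", peripheral_devices[0])
    # 无周边设备：本机正常执行，不限制音量
    return ("play_local", None)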
举例来说,在控制指令是“放点音乐”时,如果电子设备接收到的至少一个当前用户状态中存在用户正在学习、工作或睡眠时,电子设备在播放音乐时可以将音量控制在指定音量范围内。如果电子设备接收到的至少一个当前用户状态中不存在用户正在学习、工作或睡眠时,电子设备可以查找当前网络连接中的周边设备。如果电子设备查找到当前网络连接中存在音频设备,如蓝牙音箱,则电子设备可以通过音频设备播放音乐。
在电子设备确定当前网络连接中不存在周边设备时,电子设备可以执行控制指令。比如,在控制指令是“放点音乐”时,且电子设备接收到的至少一个当前用户状态中不存在用户正在学习、工作或睡眠时,电子设备在当前网络连接中未查找到周边设备,则电子设备可以打开可以播放音乐的应用程序,并播放音乐。其中,电子设备在播放音乐时,音量可以大于前述指定音量范围。
在一种可能的实现方式中,电子设备在执行控制指令后,目标设备也可以接收来自用户的语音数据。比如,用户A向电子设备1输入“放点音乐”的语音控制指令,电子设备1通过上述步骤1002-步骤1004获取到来自目标设备2的一个当前用户状态。其中,获取到的一个当前用户状态中存在用户B正在工作、学习或睡眠,因此电子设备1可以打开可以播放音乐的应用程序,并将音量控制在指定音量范围内。此时,用户B可以向目标设备2输入语音数据,例如“我在学习”,或者“小声点”等语音数据,提示用户A将音量降低。目标设备2在接收到该语音数据后,可以进行文本识别。目标设备2可以将识别结果发送给电子设备1,以使电子设备1可以根据该识别结果降低音量。其中,电子设备1降低的音量大小可以是预先设置的。
参见图13A,用户A在客厅想要播放音乐,因此可以唤醒用户A的电子设备1,并输入语音控制指令,例如“放点音乐”。电子设备1在执行播放音乐的操作之前可以先根据该语音指令确定至少一个目标设备。电子设备1可以通过连接的网关设备2,确定至少一个目标设备。其中,电子设备1接收到来自网关设备2发送的与网关设备2通信连接的设备信息。电子设备1根据设备信息可以确定存在目标设备3和目标设备4。电子设备1可以通过网关设备2向目标设备3和目标设备4发送获取当前用户状态的请求消息。目标设备3和目标设备4接收前述请求消息,并打开摄像头和/或音频模块,获取当前用户状态。其中,目标设备3确定存在用户,且用户正在工作,目标设备4确定不存在用户。目标设备3和目标设备4将获取到的当前用户状态通过网关设备2发送给电子设备1。电子设备1确定当前用户状态中存在用户正在工作的状态,因此电子设备1可以打开播放音乐的应用程序并根据该用户正在工作的状态将音量控制在较低的范围内(如音量大小为10)。
参见图13B，目标设备3接收到用户B的语音指令，例如该语音指令可以为“小声点”。目标设备3可以将该语音指令或者根据该语音指令生成的提醒消息发送给所属区域内的全部电子设备。可选的，目标设备3首先确定自身是否在运行媒体类服务，如果自身在运行媒体类服务则可以响应该语音指令，并降低媒体类服务的音量。如果自身未在运行媒体类服务则可以将该语音指令或者根据该语音指令生成的提醒消息发送给所属区域内的全部电子设备。
在一个示例中，目标设备3可以确定在接收到该语音指令之前的一段时间内，是否发送过当前用户状态。如果目标设备3发送过当前用户状态，则可以将该语音指令或者根据该语音指令生成的提醒消息发送给接收前述当前用户状态的设备。比如，目标设备3在接收到该语音指令之前的一段时间内，向电子设备1发送过当前用户状态，因此目标设备3可以将该语音指令或者提醒消息发送给电子设备1。
另一个示例中,目标设备3可以确定所属区域内的其他设备是否正在运行媒体类服务。比如,目标设备3可以向所属区域内的其他设备发送获取当前设备状态的请求消息。其他设备可以确定当前自身是否正在运行媒体类服务,并将获取到的当前设备状态发送给目标设备3。可选的,其他设备可以在自身正在运行媒体类服务时,向目标设备3发送响应消息,以告知目标设备3自身当前正在运行媒体类服务。目标设备3可以向当前正在运行媒体类服务的其他设备发送该语音指令或者根据该语音指令生成的提醒消息。电子设备1在接收到该语音指令或该语音指令对应的提醒消息后,可以响应于该语音指令或者提醒消息,将当前音乐的音量降低,如调低至5。可选的,电子设备1还可以通过显示屏显示“当前用户B需要将音乐音量降低”等提示消息。或者,电子设备1也可以在将音乐的音量降低前,征求用户A的同意,例如,可以在显示屏显示“当前用户B需要将音量降低,是否同意”,并在用户A同意时,将音乐的音量降低。其中,用户A可以通过手动输入的方式或者通过输入语音指令的方式,向电子设备1输入是否同意的指令。
在一种可能的实现方式中,电子设备在执行控制指令时,还可以获取当前网络连接中的至少一个控制设备。这里的控制设备可以是用于控制智能家电或者车机的设备。电子设备可以向控制设备发送控制指令,使得控制设备可以根据控制指令控制智能家电或者车机。
参见图14A，用户A在客厅想看电影，因此可以唤醒电子设备1，并输入语音控制指令，如“放个电影”。电子设备1通过连接的网关设备2确定了目标设备3。电子设备1可以向目标设备3发送获取当前用户状态的请求消息。目标设备3可以接收该请求消息，并打开摄像头和音频设备。目标设备3确定不存在用户，并将该当前用户状态发送给电子设备1。电子设备1可以确定获取到的当前用户状态中不存在用户正在学习、工作或睡眠的当前用户状态，因此电子设备1可以通过当前网络连接中的大屏设备，将电影通过大屏设备进行播放。可选的，电子设备1可以在显示屏中显示当前网络连接中有大屏设备，正在使用大屏设备播放电影的提示消息。或者，电子设备1可以在显示屏显示当前网络连接中有大屏设备，是否使用大屏设备播放电影的请求消息，并在用户同意后，通过大屏设备播放电影。其中，用户可以通过手动输入或者语音输入是否同意的指令。
在一种可能的实现方式中，该场景中还可以包含至少一个控制设备，如图14B中所示的控制设备4和控制设备5。电子设备1可以确定如果播放电影，会需要一个较为黑暗的环境，因此电子设备1可以向控制设备4发送关闭窗帘的控制指令，向控制设备5发送关闭照明灯的控制指令。因此，控制设备4可以根据电子设备1的控制指令关闭窗帘，控制设备5可以根据电子设备1的控制指令关闭照明灯。可选的，电子设备1可以在向控制设备4和控制设备5发送控制指令前，确定用户A是否同意。比如，电子设备1可以在显示屏中显示“是否关闭窗帘”和“是否关闭照明灯”的提示消息。用户A可以通过手动输入或者语音输入是否同意的指令，电子设备1可以在用户输入同意的指令后，向控制设备4和控制设备5发送控制指令。
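上述"根据将要执行的操作向控制设备下发联动指令"的逻辑可以用如下Python代码示意。其中"播放电影需要较暗环境"对应图14B的场景，设备名与指令格式均为说明而假设：

```python
def prepare_scene_commands(action, controllers):
    """根据将要执行的操作，推断需要下发给各控制设备的控制指令（示意实现）。"""
    if action != "play_movie":
        return []
    commands = []
    if "curtain" in controllers:
        commands.append(("curtain", "close"))   # 向窗帘控制设备下发关闭指令
    if "light" in controllers:
        commands.append(("light", "off"))       # 向照明控制设备下发关闭指令
    return commands
```

实际下发前还可以如上文所述，先通过显示屏征求用户同意。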
参阅图14C,目标设备3接收到了来电请求,由于有用户需要接听目标设备3的来电请求,因此目标设备3可以向电子设备1发送当前用户状态,指示当前用户需要外部环境是安静的。电子设备1可以根据目标设备3发送的当前用户状态,将当前播放媒体的音量(如图14C的场景下,可以是调低目前播放媒体的电视的音量)降低到预定范围内。
参阅图14D,用户A在客厅想看电影,因此可以唤醒电子设备1,并输入语音控制指令,如“放个电影”。电子设备1可以确定所属区域内存在目标设备3和目标设备4。其中,目标设备3通过摄像头确定有一个或多个用户在大屏设备前,目标设备3可以确定有一个或多个用户正在等待看电影,所以可以向电子设备1发送有一个或用户正在等待看电影的当前用户状态。电子设备1可以通过当前网络连接中的大屏设备播放电影。
可选的,上述目标设备3可以是大屏设备,大屏设备可以通过摄像头确定是否有一个或多个人脸,或者可以通过摄像头确定是否有一个或多个用户正在注视着大屏设备。如果大屏设备确定有一个或多个人脸,或者大屏设备确定有一个或多个用户正在注视着大屏设备,则大屏设备可以确定当前有一个或多个用户正在等待看电影。
参阅图15，用户想要在车内播放音乐，因此可以唤醒电子设备1，并输入语音控制指令。电子设备1可以确定当前不存在用户正在学习、工作或者睡眠，因此可以打开可以播放音乐的应用程序，并可以通过车内的公放设备2播放音乐。假设电子设备1接收到了来电请求，电子设备1可以确定如果想要接听来电请求则需要一个较为安静的环境，因此电子设备1可以向车机设备3发送关闭车窗的控制指令。车机设备3可以根据电子设备1发送的控制指令，关闭车窗。可选的，电子设备1可以在向车机设备3发送关闭车窗的控制指令前，在显示屏显示“是否关闭车窗”的提示消息，并在用户输入同意的指令后，将控制指令发送给车机设备3。
如图16所示，本申请另外一些实施例公开了一种电子设备1600，该电子设备可以包括：一个或多个处理器1601；一个或多个存储器1602和一个或多个收发器1603；其中，所述一个或多个存储器1602存储有一个或多个计算机程序，所述一个或多个计算机程序包括指令。示例性的，图16中示意出了一个处理器1601以及一个存储器1602。当所述指令被所述一个或多个处理器1601执行时，使得所述电子设备1600执行以下步骤：接收用户通过所述电子设备上的语音助手输入的语音指令；确定所属区域内的至少一个用户的当前用户状态；根据所述至少一个用户的当前用户状态，响应所述语音指令。
在一种设计中,处理器1601具体可以执行以下步骤:确定所属区域内的至少一个目标设备;通过收发器1603向所述至少一个目标设备发送第一请求消息,所述第一请求消息用于获取所述当前用户状态。所述收发器1603接收来自至少一个目标设备的至少一个当前用户状态。
在一种设计中,处理器1601具体可以执行以下步骤:若所述至少一个当前用户状态中存在第一用户状态,则执行所述语音指令所对应的操作;所述第一用户状态表示用户需求的噪声环境;或者若所述至少一个当前用户状态中不存在所述第一用户状态,则查找当前网络连接中的至少一个周边设备;通过所述至少一个周边设备执行所述语音指令所对应的操作。
在一种设计中，所述至少一个目标设备具有目标用户标识，所述电子设备具有用户标识；所述用户标识与所述目标用户标识在同一个语音助手群组中。
在一种设计中,处理器1601具体可以执行以下步骤:响应于所述语音指令,生成所述第一信息;所述语音指令包含事件信息和时间点。所述收发器1603将所述第一信息发送给所述至少一个目标设备。
在一种设计中,当所述指令被所述一个或多个处理器1601执行时,使得所述电子设备1600执行以下步骤:接收来自第一电子设备的第一请求消息,所述第一请求消息用于第一电子设备获取当前用户状态;获取当前用户状态;将所述当前用户状态发送给所述第一电子设备。
在一种设计中,处理器1601可以具体执行以下步骤:采用传感器获取当前用户状态;和/或采集用户的设置信息,获取当前用户状态。
在一种设计中,处理器1601可以具体执行以下步骤:通过收发器1603接收第一信息;所述第一信息包含事件信息和时间点。根据所述时间点显示所述事件信息。
在一种设计中,当所述指令被所述一个或多个处理器1601执行时,使得所述电子设备1600执行以下步骤:接收用户通过所述电子设备上的语音助手输入的语音指令;响应所述语音指令,将所述语音指令发送给第二电子设备;所述电子设备具有第一用户标识,所述第二电子设备具有第二用户标识;所述第一用户标识和所述第二用户标识在同一个语音助手群组中。
在一种设计中,所述处理器1601可以具体执行以下步骤:响应所述语音指令,生成对应的第一消息;所述第一消息包含事件信息和时间点;通过收发器1603将所述第一消息发送给所述第二电子设备。
在一种设计中,收发器1603通过所述电子设备上的语音助手将所述语音指令,发送给对应所述第二用户标识的语音助手。
在一种设计中,当所述指令被所述一个或多个处理器1601执行时,使得所述电子设备1600执行以下步骤:通过收发器1603接收来自第一电子设备的语音指令;根据所述语音指令生成第一消息,所述第一消息包含事件信息和时间点;根据所述时间点显示所述事件信息。或者
通过收发器1603接收来自第一电子设备的第一消息,所述第一消息包含事件信息和时间点。根据所述时间点显示所述事件信息。所述第一电子设备具有第一用户标识,所述电子设备具有第二用户标识;所述第一用户标识和所述第二用户标识在同一个语音助手群组中。
在一种设计中,收发器1603通过语音助手接收来自所述第一电子设备的语音助手的所述第一消息。
需要说明的是,本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。本发明实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。例如,上述实施例中,第一获取单元和第二获取单元可以是同一个单元,也不同的单元。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
上述实施例中所用,根据上下文,术语“当…时”可以被解释为意思是“如果…”或“在…后”或“响应于确定…”或“响应于检测到…”。类似地,根据上下文,短语“在确定…时”或“如果检测到(所陈述的条件或事件)”可以被解释为意思是“如果确定…”或“响应于确定…” 或“在检测到(所陈述的条件或事件)时”或“响应于检测到(所陈述的条件或事件)”。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时，全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线（例如同轴电缆、光纤、数字用户线）或无线（例如红外、无线、微波等）方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质（例如，软盘、硬盘、磁带）、光介质（例如DVD）、或者半导体介质（例如固态硬盘）等。
为了解释的目的,前面的描述是通过参考具体实施例来进行描述的。然而,上面的示例性的讨论并非意图是详尽的,也并非意图要将本申请限制到所公开的精确形式。根据以上教导内容,很多修改形式和变型形式都是可能的。选择和描述实施例是为了充分阐明本申请的原理及其实际应用,以由此使得本领域的其他技术人员能够充分利用具有适合于所构想的特定用途的各种修改的本申请以及各种实施例。

Claims (30)

  1. A control method for an electronic device, wherein the method comprises:
    receiving, by an electronic device, a voice instruction input by a user through a voice assistant on the electronic device;
    determining, by the electronic device, a current user status of at least one user in an area to which the electronic device belongs;
    responding, by the electronic device, to the voice instruction according to the current user status of the at least one user.
  2. The method according to claim 1, wherein the determining, by the electronic device, a current user status of at least one user in an area to which the electronic device belongs comprises:
    determining, by the electronic device, at least one target device in the area;
    sending, by the electronic device, a first request message to the at least one target device, wherein the first request message is used to obtain the current user status;
    receiving, by the electronic device, at least one current user status from the at least one target device.
  3. The method according to claim 1 or 2, wherein the responding, by the electronic device, to the voice instruction according to the current user status of the at least one user comprises:
    if a first user status exists in the at least one current user status, performing, by the electronic device, an operation corresponding to the voice instruction, wherein the first user status indicates a noise environment required by the user; or
    if the first user status does not exist in the at least one current user status, searching, by the electronic device, for at least one peripheral device in a current network connection, and performing, by the electronic device, the operation corresponding to the voice instruction through the at least one peripheral device.
  4. The method according to any one of claims 1 to 3, wherein the at least one target device has a target user identifier, and the electronic device has a user identifier; the user identifier and the target user identifier are in a same voice assistant group.
  5. The method according to claim 1, wherein the responding, by the electronic device, to the voice instruction according to the current user status of the at least one user comprises:
    generating, by the electronic device, first information in response to the voice instruction, wherein the voice instruction comprises event information and a time point;
    sending, by the electronic device, the first information to the at least one target device.
  6. A control method for an electronic device, comprising:
    receiving, by a target device, a first request message from an electronic device, wherein the first request message is used by the electronic device to obtain a current user status;
    obtaining, by the target device, the current user status;
    sending, by the target device, the current user status to the electronic device.
  7. The method according to claim 6, wherein the obtaining, by the target device, the current user status comprises:
    obtaining, by the target device, the current user status by using a sensor; and/or
    collecting, by the target device, setting information of a user to obtain the current user status.
  8. The method according to claim 7, wherein the target device has a target user identifier, and the electronic device has a user identifier; the user identifier and the target user identifier are in a same voice assistant group.
  9. The method according to claim 8, further comprising:
    receiving, by the target device, first information, wherein the first information comprises event information and a time point;
    displaying, by the target device, the event information according to the time point.
  10. A control method for an electronic device, wherein the method comprises:
    receiving, by a first electronic device, a voice instruction input by a user through a voice assistant on the first electronic device;
    sending, by the first electronic device in response to the voice instruction, the voice instruction to a second electronic device, wherein the first electronic device has a first user identifier, the second electronic device has a second user identifier, and the first user identifier and the second user identifier are in a same voice assistant group.
  11. The method according to claim 10, wherein the sending, by the first electronic device in response to the voice instruction, the voice instruction to the second electronic device comprises:
    generating, by the first electronic device, a corresponding first message in response to the voice instruction, wherein the first message comprises event information and a time point;
    sending, by the first electronic device, the first message to the second electronic device, so that the second electronic device displays the event information according to the time point.
  12. The method according to claim 10 or 11, wherein the sending, by the first electronic device in response to the voice instruction, the voice instruction to the second electronic device comprises:
    sending, by the first electronic device through the voice assistant on the first electronic device, the voice instruction to a voice assistant corresponding to the second user identifier.
  13. A control method for an electronic device, comprising:
    receiving, by a second electronic device, a voice instruction from a first electronic device, generating, by the second electronic device, a first message according to the voice instruction, wherein the first message comprises event information and a time point, and displaying, by the second electronic device, the event information according to the time point; or
    receiving, by a second electronic device, a first message from a first electronic device, wherein the first message comprises event information and a time point, and displaying, by the second electronic device, the event information according to the time point;
    wherein the first electronic device has a first user identifier, the second electronic device has a second user identifier, and the first user identifier and the second user identifier are in a same voice assistant group.
  14. The method according to claim 13, wherein the receiving, by the second electronic device, the first message from the first electronic device comprises:
    receiving, by the second electronic device through a voice assistant, the first message from a voice assistant of the first electronic device.
  15. An electronic device, comprising:
    one or more processors;
    a memory;
    and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprise instructions, and the instructions, when executed by the electronic device, cause the electronic device to perform the following steps:
    receiving a voice instruction input by a user through a voice assistant on the electronic device;
    determining a current user status of at least one user in an area to which the electronic device belongs;
    responding to the voice instruction according to the current user status of the at least one user.
  16. The electronic device according to claim 15, wherein the instructions, when executed by the electronic device, cause the electronic device to specifically perform the following steps:
    determining at least one target device in the area;
    sending a first request message to the at least one target device, wherein the first request message is used to obtain the current user status;
    receiving at least one current user status from the at least one target device.
  17. The electronic device according to claim 15 or 16, wherein the instructions, when executed by the electronic device, cause the electronic device to specifically perform the following steps:
    if a first user status exists in the at least one current user status, performing an operation corresponding to the voice instruction, wherein the first user status indicates a noise environment required by the user; or
    if the first user status does not exist in the at least one current user status, searching for at least one peripheral device in a current network connection, and performing the operation corresponding to the voice instruction through the at least one peripheral device.
  18. The electronic device according to any one of claims 15 to 17, wherein the at least one target device has a target user identifier, and the electronic device has a user identifier; the user identifier and the target user identifier are in a same voice assistant group.
  19. The electronic device according to claim 15, wherein the instructions, when executed by the electronic device, cause the electronic device to specifically perform the following steps:
    generating first information in response to the voice instruction, wherein the voice instruction comprises event information and a time point;
    sending the first information to the at least one target device.
  20. An electronic device, comprising:
    one or more processors;
    a memory;
    and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprise instructions, and the instructions, when executed by the electronic device, cause the electronic device to perform the following steps:
    receiving a first request message from a first electronic device, wherein the first request message is used by the first electronic device to obtain a current user status;
    obtaining the current user status;
    sending the current user status to the first electronic device.
  21. The electronic device according to claim 20, wherein the instructions, when executed by the electronic device, cause the electronic device to specifically perform the following steps:
    obtaining the current user status by using a sensor; and/or
    collecting setting information of a user to obtain the current user status.
  22. The electronic device according to claim 20 or 21, wherein the electronic device has a target user identifier, and the first electronic device has a user identifier; the user identifier and the target user identifier are in a same voice assistant group.
  23. The electronic device according to claim 22, wherein the instructions, when executed by the electronic device, cause the electronic device to further perform the following steps:
    receiving first information, wherein the first information comprises event information and a time point;
    displaying the event information according to the time point.
  24. A first electronic device, comprising:
    one or more processors;
    a memory;
    and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprise instructions, and the instructions, when executed by the electronic device, cause the first electronic device to perform the following steps:
    receiving a voice instruction input by a user through a voice assistant on the first electronic device;
    in response to the voice instruction, sending the voice instruction to a second electronic device, wherein the first electronic device has a first user identifier, the second electronic device has a second user identifier, and the first user identifier and the second user identifier are in a same voice assistant group.
  25. The device according to claim 24, wherein the instructions, when executed by the electronic device, cause the first electronic device to specifically perform the following steps:
    generating a corresponding first message in response to the voice instruction, wherein the first message comprises event information and a time point;
    sending the first message to the second electronic device, so that the second electronic device displays the event information according to the time point.
  26. The device according to claim 24 or 25, wherein the instructions, when executed by the electronic device, cause the first electronic device to specifically perform the following step:
    sending, through the voice assistant on the first electronic device, the voice instruction to a voice assistant corresponding to the second user identifier.
  27. A second electronic device, comprising:
    one or more processors;
    a memory;
    and one or more computer programs, wherein the one or more computer programs are stored in the memory, the one or more computer programs comprise instructions, and the instructions, when executed by the electronic device, cause the second electronic device to perform the following steps:
    receiving a voice instruction from a first electronic device, generating a first message according to the voice instruction, wherein the first message comprises event information and a time point, and displaying the event information according to the time point; or
    receiving a first message from a first electronic device, wherein the first message comprises event information and a time point, and displaying the event information according to the time point;
    wherein the first electronic device has a first user identifier, the second electronic device has a second user identifier, and the first user identifier and the second user identifier are in a same voice assistant group.
  28. The device according to claim 27, wherein the instructions, when executed by the electronic device, cause the second electronic device to specifically perform the following step:
    receiving, through a voice assistant, the first message from a voice assistant of the first electronic device.
  29. A computer-readable storage medium comprising instructions, wherein the instructions, when run on an electronic device, cause the electronic device to perform the method according to any one of claims 1 to 5, the method according to any one of claims 6 to 9, the method according to any one of claims 10 to 12, or the method according to any one of claims 13 to 14.
  30. A computer program product comprising instructions, wherein, when the computer program product runs on an electronic device, the electronic device is caused to perform the method according to any one of claims 1 to 5, the method according to any one of claims 6 to 9, the method according to any one of claims 10 to 12, or the method according to any one of claims 13 to 14.
PCT/CN2021/116074 2020-10-31 2021-09-01 Control method and apparatus for electronic device WO2022088964A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21884695.4A EP4221172A4 (en) 2020-10-31 2021-09-01 CONTROL METHOD AND APPARATUS FOR ELECTRONIC DEVICE
US18/250,511 US20230410806A1 (en) 2020-10-31 2021-09-01 Electronic device control method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011198245.1A 2020-10-31 2020-10-31 Control method and apparatus for electronic device
CN202011198245.1 2020-10-31

Publications (1)

Publication Number Publication Date
WO2022088964A1 (zh) 2022-05-05

Family

ID=81357678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116074 WO2022088964A1 (zh) 2020-10-31 2021-09-01 一种电子设备的控制方法和装置

Country Status (4)

Country Link
US (1) US20230410806A1 (zh)
EP (1) EP4221172A4 (zh)
CN (1) CN114449110B (zh)
WO (1) WO2022088964A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024012413A1 (zh) * 2022-07-11 2024-01-18 华为技术有限公司 显示控制方法、电子设备及计算机可读存储介质
US12106757B1 (en) * 2023-07-14 2024-10-01 Deepak R. Chandran System and a method for extending voice commands and determining a user's location

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11721347B1 (en) * 2021-06-29 2023-08-08 Amazon Technologies, Inc. Intermediate data for inter-device speech processing
CN115170239A (zh) * 2022-07-14 2022-10-11 艾象科技(深圳)股份有限公司 一种商品定制服务系统及商品定制服务方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130073293A1 (en) * 2011-09-20 2013-03-21 Lg Electronics Inc. Electronic device and method for controlling the same
CN105100403A (zh) * 2015-05-26 2015-11-25 努比亚技术有限公司 一种信息处理方法、电子设备及系统
CN107222391A (zh) * 2017-05-26 2017-09-29 北京小米移动软件有限公司 群组提醒方法、装置及设备
CN109376669A (zh) * 2018-10-30 2019-02-22 南昌努比亚技术有限公司 智能助手的控制方法、移动终端及计算机可读存储介质
CN110622126A (zh) * 2017-05-15 2019-12-27 谷歌有限责任公司 通过自动化助理来提供对用户控制资源的访问
CN110944056A (zh) * 2019-11-29 2020-03-31 深圳传音控股股份有限公司 交互方法、移动终端及可读存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4497346B2 (ja) * 2003-11-20 2010-07-07 Sony Ericsson Mobile Communications Japan, Inc. Electronic device control apparatus and electronic device control system
CN105652704A (zh) * 2014-12-01 2016-06-08 Qingdao Haier Intelligent Technology R&D Co., Ltd. Home background music playback control method
CN105791518B (zh) * 2014-12-23 2020-09-25 Lenovo (Beijing) Co., Ltd. Information processing method and electronic device
CN105872746A (zh) * 2015-11-26 2016-08-17 LeTV Information Technology (Beijing) Co., Ltd. Method and terminal device for muting according to usage scenario
US10230841B2 (en) * 2016-11-22 2019-03-12 Apple Inc. Intelligent digital assistant for declining an incoming call
CN109348068A (zh) * 2018-12-03 2019-02-15 MIGU Digital Media Co., Ltd. Information processing method, apparatus, and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4221172A4 *


Also Published As

Publication number Publication date
EP4221172A4 (en) 2024-06-12
US20230410806A1 (en) 2023-12-21
CN114449110A (zh) 2022-05-06
CN114449110B (zh) 2023-11-03
EP4221172A1 (en) 2023-08-02

Similar Documents

Publication Publication Date Title
WO2021052263A1 (zh) Voice assistant display method and apparatus
AU2019385366B2 (en) Voice control method and electronic device
WO2021129688A1 (zh) Display method and related product
US12058145B2 (en) Device control method and device
CN110910872B (zh) Voice interaction method and apparatus
CN110138959B (zh) Method for displaying a prompt of a human-computer interaction instruction, and electronic device
CN111650840B (zh) Smart home scene orchestration method and terminal
WO2022088964A1 (zh) Control method and apparatus for electronic device
WO2021052282A1 (zh) Data processing method, Bluetooth module, electronic device, and readable storage medium
CN114173204B (zh) Message prompting method, electronic device, and system
CN111819533B (zh) Method for triggering an electronic device to execute a function, and electronic device
WO2021057452A1 (zh) Atomic service presentation method and apparatus
CN114173000B (zh) Message reply method, electronic device, system, and storage medium
CN113496426A (zh) Service recommendation method, electronic device, and system
WO2021052139A1 (zh) Gesture input method and electronic device
WO2023273321A1 (zh) Voice control method and electronic device
WO2022088963A1 (zh) Electronic device unlocking method and apparatus
WO2021147483A1 (zh) Data sharing method and apparatus
CN115883714B (zh) Message reply method and related device
WO2022052767A1 (zh) Device control method, electronic device, and system
WO2024060968A1 (zh) Method for managing service cards, and electronic device
WO2024114493A1 (zh) Human-computer interaction method and apparatus
WO2023124829A1 (zh) Collaborative voice input method, electronic device, and computer-readable storage medium
WO2022042774A1 (zh) Avatar display method and electronic device
CN118368356A (zh) Calendar schedule sharing method, electronic device, and communication system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21884695; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2021884695; Country of ref document: EP; Effective date: 20230426)
NENP Non-entry into the national phase (Ref country code: DE)