CN116527807A - Stream re-broadcasting method, electronic device, storage medium and program product - Google Patents
- Publication number
- CN116527807A CN116527807A CN202210081741.1A CN202210081741A CN116527807A CN 116527807 A CN116527807 A CN 116527807A CN 202210081741 A CN202210081741 A CN 202210081741A CN 116527807 A CN116527807 A CN 116527807A
- Authority
- CN
- China
- Prior art keywords
- interface
- electronic equipment
- information
- determining
- electronic device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72433—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72409—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72451—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to schedules, e.g. using calendar applications
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
Embodiments of the present application provide a stream re-broadcasting method, an electronic device, a storage medium and a program product. The stream re-broadcasting method comprises: determining that a target event has occurred on the electronic device; determining that the electronic device is connected to a first apparatus, the first apparatus having a voice broadcasting function; determining that a first interface of the electronic device is a reading interface, the first interface being the last interface in use before the target event occurred; and receiving first confirmation information sent by the first apparatus and voice-broadcasting the information on the first interface through the first apparatus, the first confirmation information being the information by which the user confirms, through the first apparatus, that voice broadcasting should proceed. When the user's reading of information on the electronic device is interrupted, the interrupted reading is continued by streaming the content to the voice apparatus for broadcast, improving the user experience.
Description
[ field of technology ]
Embodiments of the present application relate to the technical field of voice broadcasting, and in particular to a stream re-broadcasting method, an electronic device, a storage medium and a program product.
[ background Art ]
Voice broadcasting refers to the process of synthesizing speech from text. When a user cannot, or finds it inconvenient to, read the screen while interacting with a smart device, the reading process is interrupted. For example, a page jump or a screen-off event may occur while the user is reading information on the device; or the user may enter a driving or exercising state in which it is inconvenient to keep watching the display, which prevents the user from continuing to read the information on the smart device.
Therefore, when a user's reading of information on a smart device is interrupted, providing the user with a smooth and seamless way to continue acquiring that information is a problem to be solved.
[ invention ]
Embodiments of the present application provide a stream re-broadcasting method, an electronic device, a storage medium and a program product, which address the problem that when a user's reading of information on a smart device is interrupted, the user cannot continue reading the information, resulting in a poor user experience.
In a first aspect, an embodiment of the present application provides a stream re-broadcasting method, applied to an electronic device having a display screen, comprising: determining that a target event has occurred on the electronic device; determining that the electronic device is connected to a first apparatus, the first apparatus having a voice broadcasting function; determining that a first interface of the electronic device is a reading interface, the first interface being the last interface in use before the target event occurred on the electronic device; and receiving first confirmation information sent by the first apparatus and voice-broadcasting the information on the first interface through the first apparatus, the first confirmation information being the information by which the user confirms that voice broadcasting should be carried out through the first apparatus.
In one possible implementation, the determining that a target event has occurred on the electronic device comprises: determining that a screen-off event or a page-jump event has occurred on the electronic device, or that the electronic device has entered a motion state, a driving state or a riding state.
In one possible implementation, the determining that a target event has occurred on the electronic device comprises: acquiring current time information, and looking up the user habit information corresponding to the current time information in pre-stored data that associates time information with user habit information; and if the user habit information indicates that the user enters a motion state, a driving state or a riding state at that time, determining that a target event has occurred on the electronic device.
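The habit-based check above can be sketched as a lookup from the current time into a pre-stored habit table. In the sketch below, the window boundaries, state names, and table layout are illustrative assumptions; the patent does not specify them:

```python
# Hypothetical sketch of the habit-based target-event check: the habit
# table maps time-of-day windows to the state the user typically enters.
from datetime import time

# Pre-stored associations of time windows with habitual user states
# (illustrative values, e.g. a commute window and an evening workout).
USER_HABITS = [
    (time(8, 0), time(9, 0), "driving"),
    (time(19, 0), time(20, 0), "exercising"),
]

# States whose onset counts as a target event in this sketch.
TARGET_STATES = {"driving", "riding", "exercising"}

def habit_target_event(now: time) -> bool:
    """True if `now` falls in a window whose habitual user state counts
    as a target event (motion, driving, or riding)."""
    return any(start <= now <= end and state in TARGET_STATES
               for start, end, state in USER_HABITS)
```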
In one possible implementation, the determining that a screen-off event has occurred on the electronic device comprises: when the electronic device is detected to be in a screen-off state, acquiring a first duration, the first duration being the length of time the display screen remained lit before the electronic device went dark; and if the first duration is greater than a preset screen-on duration threshold, determining that a screen-off event has occurred on the electronic device.
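As a minimal sketch, the screen-off check above reduces to comparing the preceding screen-on duration against a preset threshold, so that a brief glance at the phone does not trigger re-broadcasting. The threshold value below is an arbitrary placeholder, not a value from the patent:

```python
# Hypothetical sketch: a screen-off only counts as a target event if the
# display had stayed lit longer than a preset threshold beforehand,
# suggesting the user was actually reading when the screen went dark.
SCREEN_ON_THRESHOLD_S = 10.0  # placeholder preset threshold, in seconds

def screen_off_target_event(screen_is_off: bool,
                            screen_on_duration_s: float) -> bool:
    """True if the device is in the screen-off state and the first
    duration (time the display stayed on before going off) exceeds
    the preset screen-on duration threshold."""
    return screen_is_off and screen_on_duration_s > SCREEN_ON_THRESHOLD_S
```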
In one possible implementation, the determining that the electronic device is connected to a first apparatus comprises: determining that the electronic device is electrically or communicatively connected to at least one of the following apparatuses: an earphone, a car head unit, a smart speaker, a large screen or a smart watch.
In one possible implementation, the determining that the electronic device is connected to a first apparatus comprises: detecting whether the electronic device is electrically or communicatively connected to an earphone, and if so, determining that the electronic device is connected to a first apparatus; if not, detecting whether the electronic device is electrically or communicatively connected to a car head unit, a smart speaker, a large screen or a smart watch, and if so, determining that the electronic device is connected to a first apparatus.
In one possible implementation, the determining that the electronic device is connected to a first apparatus comprises: detecting whether the electronic device is electrically or communicatively connected to an earphone and has received a photoplethysmography (PPG) signal, the PPG signal being a signal the earphone sends to the electronic device when the user is wearing it, and if so, determining that the electronic device is connected to a first apparatus; if not, detecting whether the electronic device is electrically or communicatively connected to a car head unit, a smart speaker, a large screen or a smart watch, and if so, determining that the electronic device is connected to a first apparatus.
In one possible implementation, the method further comprises: if the electronic device is detected to be electrically or communicatively connected to an earphone but no PPG signal is detected, sending a first instruction to the earphone, the first instruction instructing the earphone to enable its PPG function.
In one possible implementation, when it is determined that the electronic device has entered a driving state, the detecting whether the electronic device is electrically or communicatively connected to a car head unit, a smart speaker, a large screen or a smart watch comprises: detecting whether the electronic device is electrically or communicatively connected to the car head unit, and if so, determining that the electronic device is connected to a first apparatus; if not, detecting whether the electronic device is electrically or communicatively connected to the smart speaker, the large screen or the smart watch, and if so, determining that the electronic device is connected to a first apparatus.
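The connection checks in the implementations above amount to a priority cascade: worn earphones first, then (in the driving case) the car head unit, then the remaining apparatuses. A minimal sketch, where the device names and the boolean PPG-wearing flag are illustrative stand-ins rather than interfaces named in the patent:

```python
# Hypothetical sketch of the connection-priority cascade for picking
# the first apparatus used for voice broadcasting.
from typing import Optional

def find_first_apparatus(connected: set, ppg_detected: bool,
                         driving: bool = False) -> Optional[str]:
    """Return the apparatus to broadcast through, or None if nothing
    suitable is connected."""
    if "earphone" in connected and ppg_detected:
        return "earphone"      # worn earphones take top priority
    if driving and "car_unit" in connected:
        return "car_unit"      # in the driving state, prefer the car unit
    for device in ("car_unit", "smart_speaker", "large_screen", "smart_watch"):
        if device in connected:
            return device
    return None
```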
In one possible implementation, the determining that the first interface of the electronic device is a reading interface comprises: determining that the first interface of the electronic device is a first-type reading interface, the first-type reading interface being an interface of a reading-type APP; or determining that the first interface of the electronic device is a second-type reading interface, the second-type reading interface being an interface of a chat APP.
In one possible implementation, the determining that the first interface is a first-type reading interface comprises: acquiring first APP information, the first APP information being information on the application (APP) to which the first interface belongs; and when the first APP information is in a pre-stored first white list, determining that the first interface is a first-type reading interface, the first white list being the list of APP information corresponding to first-type reading interfaces.
In one possible implementation, the voice-broadcasting of the information on the first interface through the first apparatus comprises: acquiring a first continuation position, the first continuation position being the position in the first interface at which the user's reading was interrupted; and starting to voice-broadcast the information on the first interface from the first continuation position through the first apparatus.
In one possible implementation, the acquiring of the first continuation position comprises: acquiring a first reading speed, the first reading speed being the user's reading speed; acquiring a first dwell time, the first dwell time being how long the user stayed on the first interface; calculating a first read word count from the first reading speed and the first dwell time, the first read word count being the number of words the user read on the first interface; and acquiring the target character corresponding to the first read word count and determining the complete sentence containing the target character as the first continuation position.
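As a rough sketch, the estimate described above (reading speed multiplied by dwell time, then snapped back to a complete sentence so the broadcast resumes at a sentence boundary) might look like the following. The whitespace tokenization and the sentence-splitting regex are simplifying assumptions not specified in the patent:

```python
# Hypothetical sketch of computing the continuation position.
import re

def continuation_offset(text: str, reading_speed_wps: float,
                        dwell_time_s: float) -> int:
    """Return the character offset in `text` from which to resume the
    voice broadcast: the start of the sentence containing the last
    word the user is estimated to have read."""
    words_read = int(reading_speed_wps * dwell_time_s)
    tokens = list(re.finditer(r"\S+", text))
    if words_read >= len(tokens):
        return len(text)  # the user finished the whole page
    target_pos = tokens[words_read].start()
    # Snap back to the start of the sentence containing the target word.
    sentence_start = 0
    for m in re.finditer(r"[.!?]\s+", text[:target_pos]):
        sentence_start = m.end()
    return sentence_start
```

For example, a user reading 2 words per second who dwelt 2 seconds on a page is estimated to be 4 words in; if that lands mid-sentence, broadcasting resumes at that sentence's start.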
In one possible implementation, the acquiring of the first reading speed comprises: acquiring a first time difference, the first time difference being the difference between the timestamp of the last page-turn event and the timestamp of the second-to-last page-turn event before the target event occurred on the electronic device; acquiring a first interface word count, the first interface word count being the number of words on the page reading interface opened by the second-to-last page-turn event; and dividing the first interface word count by the first time difference to obtain the first reading speed.
In one possible implementation, the acquiring of the first reading speed comprises: acquiring a second time difference, the second time difference being the difference between the timestamp of the last screen-slide event and the timestamp of the second-to-last screen-slide event before the target event occurred on the electronic device; acquiring a second interface word count, the second interface word count being the number of words between the slide starting position of the last screen-slide event and the slide starting position of the second-to-last screen-slide event; and dividing the second interface word count by the second time difference to obtain the first reading speed.
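Both reading-speed estimates above reduce to dividing a word count by the time elapsed between two successive user events (page turns or screen slides). A minimal sketch, assuming timestamps in seconds (the units and the validation are illustrative assumptions):

```python
# Hypothetical sketch of the reading-speed estimate: words shown between
# the second-to-last and last user events, divided by the elapsed time.
def reading_speed(last_event_ts: float, second_last_event_ts: float,
                  word_count: int) -> float:
    """Words per second read between the last two page-turn or
    screen-slide events before the target event."""
    dt = last_event_ts - second_last_event_ts
    if dt <= 0:
        raise ValueError("event timestamps must be strictly increasing")
    return word_count / dt
```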
In one possible implementation, the determining that the first interface is a second-type reading interface comprises: acquiring second APP information, the second APP information being information on the application (APP) to which the first interface belongs; and when the second APP information is in a pre-stored second white list, determining that the first interface is a second-type reading interface, the second white list being the list of APP information corresponding to second-type reading interfaces.
In one possible implementation, the voice-broadcasting of the information on the first interface through the first apparatus comprises: acquiring first continuation content, the first continuation content being the latest information in the first interface; and voice-broadcasting the first continuation content through the first apparatus.
In one possible implementation, the acquiring of the first continuation content comprises: monitoring a first message control, the first message control being the control that displays chat messages in the first interface; and taking the latest message acquired from the first message control as the first continuation content.
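The message-control monitoring above can be sketched as a listener that retains only the newest chat message as the content to broadcast. The callback-style API below is an illustrative stand-in for a platform UI or accessibility listener, not an interface named in the patent:

```python
# Hypothetical sketch of monitoring the chat interface's message control.
from typing import Optional

class MessageControlMonitor:
    """Watches the first message control and keeps the latest message."""

    def __init__(self) -> None:
        self._latest: Optional[str] = None

    def on_message_displayed(self, text: str) -> None:
        """Callback fired whenever the message control shows a new
        chat message; only the newest message is remembered."""
        self._latest = text

    def continuation_content(self) -> Optional[str]:
        """The latest message, used as the first continuation content
        to voice-broadcast through the first apparatus."""
        return self._latest
```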
In one possible implementation, the method further comprises: receiving first stop information and stopping the voice broadcast of the information on the first interface, the first stop information being information by which the user requests that the voice broadcast stop.
In a second aspect, embodiments of the present application provide an electronic device comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method provided in the first aspect.
In a third aspect, embodiments of the present application provide a computer-readable storage medium comprising a stored program which, when run, controls the device on which the computer-readable storage medium is located to execute the method provided in the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform the method provided in the first aspect.
According to the method and device provided by the embodiments of the present application, when it is determined that a target event has occurred on the electronic device, that a voice-broadcasting apparatus is connected, and that the last interface in use before the target event was a reading interface, the content of that interface is voice-broadcast for the user once the user agrees. When the user's reading of information on the electronic device is interrupted, the interrupted reading is continued by streaming the content to the voice apparatus for broadcast, improving the user experience.
[ description of the drawings ]
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The drawings in the following description are obviously only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 2 is a flow chart of a stream re-broadcasting method according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a method for determining that a target event has occurred on an electronic device according to an embodiment of the present application;
Fig. 4 is a flow chart of another stream re-broadcasting method according to an embodiment of the present application;
Fig. 5 is a flow chart of another stream re-broadcasting method according to an embodiment of the present application;
Fig. 6 is a flow chart of still another stream re-broadcasting method according to an embodiment of the present application;
Fig. 7 is a flow chart of another stream re-broadcasting method according to an embodiment of the present application;
Fig. 8 is a flow chart of still another stream re-broadcasting method according to an embodiment of the present application;
Fig. 9 is a flow chart of another stream re-broadcasting method according to an embodiment of the present application.
[ detailed description ] of the invention
For a better understanding of the technical solutions of the present application, embodiments of the present application are described in detail below with reference to the accompanying drawings.
In the embodiments of the present application, "at least one" means one or more, and "a plurality of" means two or more. In addition, it should be understood that in the description of this application, words such as "first" and "second" are used merely to distinguish between descriptions and do not indicate or imply relative importance or order.
The terminology used in the following embodiments is for the purpose of describing particular embodiments only and is not intended to limit the application. As used in the specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, such as "one or more," unless the context clearly indicates otherwise. It should also be understood that in the embodiments of the present application, "one or more" means one, two, or more than two; "and/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A alone, both A and B, and B alone, where A and B may each be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship.
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
In the prior art, when a user's reading of information on a smart device is interrupted, or has to be interrupted, the user cannot, or finds it inconvenient to, continue reading the information on the smart device, resulting in a poor user experience. To solve this problem, embodiments of the present application provide a stream re-broadcasting method, an electronic device, a storage medium, and a program product.
The stream re-broadcasting method disclosed in the embodiments may be applied to various types of electronic devices, for example, mobile phones, tablet computers, notebook computers, palmtop computers, mobile internet devices (MID), wearable devices (e.g., watches, bracelets, smart helmets, etc.), virtual reality (VR) devices, augmented reality (AR) devices, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), and the like; the embodiments of the present application are not limited in this respect.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an electronic device provided in an embodiment of the present application, and may serve as the structure of an electronic device to which the stream re-broadcasting method provided in the embodiments of the present application is applied.
As shown in fig. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and the like. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like. Further, when the electronic device is a mobile phone, the electronic device may further include: antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone interface 170D, and subscriber identity module (subscriber identification module, SIM) card interface 195, etc.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can be a neural center and a command center of the electronic device. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. Repeated accesses are thus avoided and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
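The reuse behavior described above can be sketched as a tiny least-recently-used cache. This is an illustrative model only, not the processor's actual cache implementation; the capacity and addresses are assumed values.

```python
from collections import OrderedDict

class ProcessorCache:
    """Tiny LRU cache sketch: keeps recently used instructions/data so the
    processor can call them directly instead of re-fetching from memory."""

    def __init__(self, capacity: int = 4):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, addr):
        if addr in self._store:
            self._store.move_to_end(addr)    # mark as most recently used
            return self._store[addr]
        return None                          # cache miss: would fetch from memory

    def put(self, addr, value):
        self._store[addr] = value
        self._store.move_to_end(addr)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used entry
```

A hit on `get` avoids the repeated access; only misses would go back to main memory.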
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The I2C interface is a bi-directional synchronous serial bus comprising a serial data line (serial data line, SDA) and a serial clock line (serial clock line, SCL). In some embodiments, the processor 110 may contain multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, a charger, a flash, the camera 193, etc., respectively, through different I2C bus interfaces. For example: the processor 110 may be coupled to the touch sensor 180K through an I2C interface, such that the processor 110 communicates with the touch sensor 180K through the I2C bus interface to implement the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may contain multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through the I2S interface, to implement a function of answering a call through the bluetooth headset.
PCM interfaces may also be used for audio communication to sample, quantize and encode analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface to implement a function of answering a call through the bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communications. The bus may be a bi-directional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is typically used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit an audio signal to the wireless communication module 160 through a UART interface, to implement a function of playing music through a bluetooth headset.
The MIPI interface may be used to connect the processor 110 to peripheral devices such as a display 194, a camera 193, and the like. The MIPI interfaces include camera serial interfaces (camera serial interface, CSI), display serial interfaces (display serial interface, DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the photographing functions of electronic device 100. The processor 110 and the display 194 communicate via a DSI interface to implement the display functionality of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transfer data between the electronic device 100 and a peripheral device. And can also be used for connecting with a headset, and playing audio through the headset. The interface may also be used to connect other electronic devices, such as AR devices, etc.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present invention is only illustrative, and is not meant to limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also use different interfacing manners, or a combination of multiple interfacing manners in the foregoing embodiments.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas may also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed into a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution for wireless communication including 2G/3G/4G/5G, etc., applied to the electronic device 100. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA), etc. The mobile communication module 150 may receive electromagnetic waves from the antenna 1, perform processes such as filtering, amplifying, and the like on the received electromagnetic waves, and transmit the processed electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 can amplify the signal modulated by the modem processor, and convert the signal into electromagnetic waves through the antenna 1 to radiate. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be provided in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low frequency baseband signal to the baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs sound signals through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional module, independent of the processor 110.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc., applied to the electronic device 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
In some embodiments, antenna 1 and mobile communication module 150 of electronic device 100 are coupled, and antenna 2 and wireless communication module 160 are coupled, such that electronic device 100 may communicate with a network and other devices through wireless communication techniques. The wireless communication techniques may include the Global System for Mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, among others. The GNSS may include a global satellite positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou satellite navigation system (BeiDou Navigation Satellite System, BDS), a quasi zenith satellite system (quasi-zenith satellite system, QZSS) and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 implements display functions through a GPU, a display screen 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
A series of graphical user interfaces (graphical user interface, GUI) may be displayed on the display 194 of the electronic device; these GUIs are the home screen of the electronic device. Generally, the size of the display 194 of an electronic device is fixed, and only limited controls can be displayed on the display 194. A control is a GUI element: a software component contained within an application program that governs all data processed by the application program and the interactive operations on that data. A user can interact with a control by direct manipulation (direct manipulation) to read or edit the information of the application program. In general, controls may include visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, and widgets. For example, in embodiments of the present application, the display 194 may display virtual keys.
The electronic device 100 may implement photographing functions through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process other digital signals besides digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, or the like.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or a portion of the functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also referred to as a "loudspeaker," is used to convert audio electrical signals into sound signals. The electronic device 100 may play music or take hands-free calls through the speaker 170A.
The receiver 170B, also referred to as an "earpiece," is used to convert audio electrical signals into sound signals. When the electronic device 100 answers a telephone call or a voice message, the voice can be heard by placing the receiver 170B close to the human ear.
The microphone 170C, also referred to as a "mic" or "mike," is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can speak close to the microphone 170C to input a sound signal into the microphone 170C. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which, in addition to collecting sound signals, may implement a noise reduction function. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to implement sound signal collection, noise reduction, sound source identification, directional recording functions, etc.
The earphone interface 170D is used to connect a wired earphone. The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and may convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are various types of pressure sensors 180A, such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, and the like. A capacitive pressure sensor may comprise at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A. The electronic device 100 may also calculate the location of the touch based on the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
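The threshold-based dispatch in the short message example can be sketched as follows. The threshold value, icon names, and instruction names are illustrative assumptions, not values from the patent.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized touch force (assumed units)

def dispatch_touch(icon: str, force: float) -> str:
    """Map a touch on the same icon to different instructions
    depending on the detected touch operation intensity."""
    if icon == "sms":
        if force < FIRST_PRESSURE_THRESHOLD:
            return "view_sms"   # light press: view the short message
        return "new_sms"        # firm press: create a new short message
    return "open_" + icon       # other icons: default open action
```

The same touch location thus yields `view_sms` or `new_sms` purely as a function of pressure.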
The gyro sensor 180B may be used to determine the motion posture of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the electronic device 100 through reverse motion, so as to realize anti-shake. The gyro sensor 180B may also be used in navigation and somatosensory game scenes.
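For a small shake angle, the compensation distance for the lens module can be approximated from simple geometry, d ≈ f·tan(θ). This is a hypothetical first-order sketch of the anti-shake calculation, not the device's actual algorithm; the focal length is an assumed parameter.

```python
import math

def lens_compensation_mm(focal_length_mm: float, shake_angle_deg: float) -> float:
    """Distance the lens module must be moved (in the reverse direction)
    to counteract an angular shake of the device: d = f * tan(theta)."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))
```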
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude from barometric pressure values measured by barometric pressure sensor 180C, aiding in positioning and navigation.
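A common way to compute altitude from a barometric pressure reading is the international barometric formula; a minimal sketch follows, assuming the standard sea-level pressure of 1013.25 hPa (the device's actual calculation may differ).

```python
def altitude_m(pressure_hpa: float, sea_level_hpa: float = 1013.25) -> float:
    """Estimate altitude (m) from barometric pressure (hPa) using the
    international barometric formula."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))
```

At roughly 900 hPa this yields an altitude near 1000 m, which can then assist positioning and navigation.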
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may detect the opening and closing of a flip cover using the magnetic sensor 180D. In some embodiments, when the electronic device 100 is a flip phone, the electronic device 100 may detect the opening and closing of the flip according to the magnetic sensor 180D. Features such as automatic unlocking upon opening the flip may then be set according to the detected open or closed state of the holster or of the flip.
The acceleration sensor 180E may detect the magnitude of the acceleration of the electronic device 100 in various directions (typically along three axes). The magnitude and direction of gravity may be detected when the electronic device 100 is stationary. It may also be used to recognize the posture of the electronic device, and is applied to landscape/portrait switching, pedometers, and other applications.
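The landscape/portrait switching mentioned above can be sketched by comparing the gravity components along the device's axes. This is an illustrative simplification (real implementations add hysteresis and debouncing); the axis convention is assumed.

```python
def screen_orientation(ax: float, ay: float) -> str:
    """Infer portrait vs. landscape from the gravity components (m/s^2)
    along the device's x (short) and y (long) axes while stationary."""
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```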
A distance sensor 180F for measuring a distance. The electronic device 100 may measure the distance by infrared or laser. In some embodiments, the electronic device 100 may range using the distance sensor 180F to achieve quick focus.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode. The light-emitting diode may be an infrared light-emitting diode. The electronic device 100 emits infrared light outward through the light-emitting diode, and detects infrared light reflected from nearby objects using the photodiode. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100; when insufficient reflected light is detected, the electronic device 100 may determine that there is no object nearby. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding the electronic device 100 close to the ear, so as to automatically turn off the screen to save power. The proximity light sensor 180G may also be used in holster mode and pocket mode to automatically unlock and lock the screen.
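The reflected-light decision and the screen-off behavior during a call can be sketched as below; the threshold and the call-state flag are assumptions for illustration only.

```python
def near_object(reflected_light: float, threshold: float = 100.0) -> bool:
    """Sufficient reflected infrared light -> an object is near the device."""
    return reflected_light >= threshold

def should_turn_off_screen(in_call: bool, reflected_light: float) -> bool:
    """Turn off the screen to save power when the user holds the
    device close to the ear during a call."""
    return in_call and near_object(reflected_light)
```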
The ambient light sensor 180L is used to sense ambient light level. The electronic device 100 may adaptively adjust the brightness of the display 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust white balance when taking a photograph. Ambient light sensor 180L may also cooperate with proximity light sensor 180G to detect whether electronic device 100 is in a pocket to prevent false touches.
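The adaptive brightness adjustment can be sketched as a clamped linear mapping from illuminance to a brightness level. The mapping, range, and clamp point are illustrative assumptions; real devices typically use a tuned, nonlinear curve.

```python
def display_brightness(lux: float, min_b: int = 10, max_b: int = 255,
                       max_lux: float = 1000.0) -> int:
    """Map perceived ambient illuminance (lux) onto a display
    brightness level between min_b and max_b."""
    lux = max(0.0, min(lux, max_lux))          # clamp to the usable range
    return round(min_b + (max_b - min_b) * lux / max_lux)
```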
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 may utilize the collected fingerprint feature to unlock the fingerprint, access the application lock, photograph the fingerprint, answer the incoming call, etc.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to prevent the low temperature from causing the electronic device 100 to shut down abnormally. In other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown caused by low temperature.
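The three-threshold strategy above can be sketched as a simple decision function. The specific threshold values are not given in the patent; the numbers below are placeholders chosen only to make the branches testable.

```python
def thermal_action(temp_c: float) -> str:
    """Choose a temperature-handling action (illustrative thresholds)."""
    if temp_c >= 45.0:
        return "throttle_cpu"            # reduce nearby processor performance
    if temp_c <= -10.0:
        return "boost_battery_voltage"   # avoid low-temperature shutdown
    if temp_c <= 0.0:
        return "heat_battery"            # warm the battery
    return "normal"
```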
The touch sensor 180K, also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is for detecting a touch operation acting thereon or thereabout. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to touch operations may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a different location than the display 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of a bone block vibrated by the human vocal part. The bone conduction sensor 180M may also contact the pulse of the human body to receive a blood pressure beat signal. In some embodiments, the bone conduction sensor 180M may also be provided in an earphone to form a bone conduction earphone. The audio module 170 may parse out a voice signal from the vibration signal of the vocal-part-vibrated bone block obtained by the bone conduction sensor 180M, so as to implement a voice function. The application processor may parse heart rate information from the blood pressure beat signal acquired by the bone conduction sensor 180M, so as to implement a heart rate detection function. The keys 190 include a power key, volume keys, etc. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration alerting as well as for touch vibration feedback. For example, touch operations acting on different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also correspond to different vibration feedback effects by touching different areas of the display screen 194. Different application scenarios (such as time reminding, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195, or removed from the SIM card interface 195, to make contact with or separate from the electronic device 100. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, SIM cards, and the like. Multiple cards may be inserted into the same SIM card interface 195 simultaneously; the types of the cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards, and with external memory cards. The electronic device 100 interacts with the network through the SIM card to realize functions such as calls and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
In addition, an operating system runs on the above components, such as the iOS operating system developed by Apple, the Android operating system developed by Google, or the Windows operating system developed by Microsoft. Applications may be installed and run on the operating system.
The electronic device according to the embodiment of the present application may be installed with an iOS operating system, an Android operating system, or a Windows operating system, or may be installed with other operating systems, which is not limited in the embodiment of the present application.
It should be noted that, in the embodiments of the present application, terms such as top end, bottom end, left end, right end, upper side, and lower side are relative; they are exemplary descriptions of specific implementations and should not be construed as limiting the embodiments of the present application.
For ease of understanding, the following embodiments of the present application take an electronic device having the structure shown in fig. 1 as an example, and specifically describe the stream re-broadcasting method provided by the embodiments of the present application in conjunction with the accompanying drawings and application scenarios.
Users often read information on the display screen of an electronic device, for example reading novels through a novel APP, reading news through a news APP or a browser APP, or chatting with friends through a chat APP. When the user has watched the display screen for a long time, feels eye discomfort, and turns the screen off; or when the user needs to call up a health code or a payment code; or when the user enters a motion mode, a driving mode, or a riding mode, the user cannot conveniently read, or continue reading, information on the display screen, and the information currently being read is forcibly interrupted, which affects the user experience.
Therefore, the embodiments of the present application provide a stream re-broadcasting method, which offers the user a smooth and seamless voice broadcast when the user's reading is interrupted, improving the user experience.
Referring to fig. 2, fig. 2 is a flow chart of a method for forwarding a stream according to an embodiment of the present application, as shown in fig. 2, where the method is applied to an electronic device having a display screen, and includes:
step 101: and determining that the electronic equipment generates a target event.
It should be noted that, while the user reads information through the electronic device, the electronic device detects whether a target event occurs. The target event may include a screen-off event, a page jump event, or entry into a motion state, a driving state, or a riding state. The target event may also be another event: any event that affects the user's ability to continue reading on the electronic device and that the electronic device can detect qualifies. This embodiment does not limit the form of the target event.
Referring to fig. 3, fig. 3 is a schematic diagram of a manner of determining that a target event occurs on the electronic device according to an embodiment of the present application. As shown in fig. 3, determining that a target event occurs may be: detecting that a screen-off event occurs on the electronic device, detecting that a page jump event occurs on the electronic device, or detecting that the electronic device enters a motion state, a driving state, or a riding state.
The electronic device may detect the occurrence of a screen-off event, a page jump event, or a motion, driving, or riding state in a variety of ways. The approach provided by the embodiments of the present application may be as follows: the events and/or states to be monitored are preset in the electronic device, a manager is established in the background of the electronic device, and the events and states generated in the background are continuously polled and monitored through the manager; if a preset event or state is detected, it is determined that a target event has occurred. The polling and monitoring process is similar for different target events. Several scenarios in which the electronic device determines that a target event has occurred are described below by way of example.
Scene one: and detecting that the electronic equipment generates a screen-off event.
For example, the manner of detecting that a screen-off event has occurred may be as follows: when the user operates the power button to turn off the screen, a screen-off event occurs in the background of the electronic device, and the screen-off event can be determined by monitoring for it in the background. Taking a mobile phone running the Android system as an example, a BroadcastReceiver in the phone's background software layer can listen for the screen-off broadcast, and when Intent.ACTION_SCREEN_OFF is received, it is determined that a screen-off event has occurred on the current electronic device.
In addition, other manners of detecting the occurrence of the screen-off event of the electronic device may be adopted, and the manner of determining that the electronic device generates the screen-off event is not limited in this embodiment.
Further, because the electronic device is screened off many times every day, treating every screen-off as a target event could cause high power consumption. The scenario targeted here is that of a user turning the screen off while reading: the user has watched the screen for a long time, feels eye discomfort, and performs a screen-off operation on the electronic device. Therefore, in order to determine that the user was likely reading before the screen was turned off, a condition on the screen-on duration before the screen-off may be added.
For example, determining that a screen-off event has occurred on the electronic device may include: when the electronic device is detected to enter the screen-off state, acquiring a first duration, where the first duration is the duration for which the display screen remained in the screen-on state before the screen-off; and if the first duration is greater than a preset screen-on duration threshold, determining that a screen-off event has occurred on the electronic device.
It can be understood that the background of the electronic device records each screen-off and screen-on event, including its occurrence time, so the duration of the screen-on state can be obtained from the current screen-off time and the most recent screen-on time. When a screen-off is detected, the duration for which the display screen remained continuously on before the screen-off is acquired, and only when this screen-on duration is greater than the preset screen-on duration threshold is it determined that a screen-off event has occurred on the electronic device, thereby further reducing the power consumption of the electronic device.
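As a minimal illustration, the screen-on-duration condition described above can be sketched as follows (Python; the timestamp arguments, the threshold value, and the function name are assumptions for illustration, not part of the patent):

```python
# Hypothetical sketch of the screen-on-duration check: a screen-off only
# counts as a target event if the display had been on longer than a preset
# threshold, suggesting the user may have been reading.

SCREEN_ON_THRESHOLD_S = 30.0  # preset screen-on duration threshold (assumed value)

def is_target_screen_off(last_screen_on_time, screen_off_time,
                         threshold=SCREEN_ON_THRESHOLD_S):
    """Return True if this screen-off should be treated as a target event."""
    first_duration = screen_off_time - last_screen_on_time
    return first_duration > threshold

# The screen stayed on for 120 s before going off -> likely a reading session.
print(is_target_screen_off(1000.0, 1120.0))  # True
```

A short screen-on period (for example, the user merely glancing at the time) would fall below the threshold and be ignored, which is exactly the power-saving intent described above.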
Scene II: and detecting that the electronic equipment generates a page jump event.
Page jump events on the electronic device arise in two cases: in one, the user clicks an advertisement link on a page, causing a page jump; in the other, the user calls up a trip code, a wallet-type application (possibly containing a transit card or a payment bank card), a health code, or the like, causing a page jump. The detection manner is similar in both cases: the background of the electronic device monitors click and jump events.
For example, taking a mobile phone running the Android system as an example, the phone presets the events and states to be monitored. When the user clicks an advertisement link and a page jump occurs, the phone's background WindowManagerGlobal obtains WindowManagerService in a getInstance manner, and WindowManagerService monitors mWindowSession; mWindowSession contains the user's click and link jump events, and when WindowManagerService detects an event identical or corresponding to a preset event, it is determined that a page jump event has occurred on the electronic device.
For another example, when the user calls up information such as a trip code or a health code, the phone's background WindowManagerGlobal obtains WindowManagerService in a getInstance manner, and WindowManagerService monitors mWindowSession and mWindowState; when WindowManagerService detects that mWindowSession and mWindowState contain information such as a health code or a trip code, it is determined that a page jump event has occurred on the electronic device.
In addition, other ways of detecting the occurrence of the page jump event of the electronic device may be adopted, and the method for determining that the page jump event occurs in the electronic device is not limited in this embodiment.
Scene III: the electronic equipment is detected to enter a driving state, a riding state or a movement state, and the electronic equipment is inconvenient to operate by a user after entering the state, so that the electronic equipment belongs to an event of inconvenient reading.
For example, detecting that reading has become inconvenient may consist of detecting that the electronic device enters a driving state, which can be divided into three cases. In the first case, when the user opens a navigation APP and starts driving navigation, a driving navigation event is recorded in the background of the electronic device; when this event is detected in the background, it can be determined that the electronic device has entered the driving state. In the second case, the user sets a driving mode on the electronic device; when the background detects the driving-mode state, it can be determined that the electronic device has entered the driving state. In the third case, the electronic device is connected to a car machine; the connection can be monitored through the ConnectivityManager, and when a car machine connection is detected, it can be determined that the electronic device has entered the driving state.
For example, detecting that the electronic device enters a motion state can be divided into two cases. In the first case, the user sets a motion mode on the electronic device; when the background detects the motion-mode state, it is determined that the electronic device has entered the motion state. In the second case, a smart watch, bracelet, or similar wearable that the user wears and that is connected to the electronic device detects a motion state or motion event through a PPG sensor, blood oxygen measurement, heart rate detection, or the like; when the background of the electronic device detects the motion state or motion event, it is determined that the electronic device has entered the motion state.
It will be appreciated that similar methods are also used to detect the user entering the riding state, and will not be described in detail herein. In addition, other ways of detecting the inconvenient reading event of the electronic device may be adopted, and the method for determining the inconvenient reading event of the electronic device is not limited in this embodiment.
In one possible implementation, the electronic device may determine that the target event occurred based on pre-stored data.
For example, determining that the electronic device has a target event may include the steps of: (1) Acquiring current time information, and acquiring user habit information corresponding to the current time information according to pre-stored time information and data of user habit information corresponding to the time information; (2) And if the user habit information indicates that the user enters a motion state, enters a driving state or enters a riding state, determining that the electronic equipment generates a target event.
When the electronic device has a user-habit analysis function and the user enables it, the electronic device analyzes the user's daily habit data, generates data associating time information with user habit information, and stores it. Based on this stored data, it can be inferred that the user is likely to encounter an event making reading inconvenient in the current time period.
For example, the step of determining that a target event occurs on the electronic device may include: acquiring current time information, and acquiring the user habit information corresponding to the current time information according to the pre-stored data of time information and corresponding user habit information; and if the user habit information indicates that the user enters a motion state, a driving state, or a riding state, determining that a target event occurs on the electronic device. Determining the occurrence of a target event from the user habit data pre-stored on the electronic device allows user needs to be discovered more intelligently and improves the user experience.
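A minimal sketch of this habit-based inference, assuming an invented table that maps hourly time slots to habitual states (none of the data below comes from the patent):

```python
# Hypothetical sketch: pre-stored time slots map to habitual user states;
# if the current time falls in a slot whose state makes reading
# inconvenient, a target event is assumed to occur.

MOTION_STATES = {"motion", "driving", "riding"}

# Invented example data: (start_hour, end_hour) -> habitual user state.
HABIT_TABLE = {
    (7, 8): "riding",     # habitual bike commute
    (18, 19): "driving",  # habitual drive home
    (21, 23): "reading",
}

def target_event_from_habits(current_hour):
    """Return True if the stored habits suggest reading is inconvenient now."""
    for (start, end), state in HABIT_TABLE.items():
        if start <= current_hour < end:
            return state in MOTION_STATES
    return False

print(target_event_from_habits(18))  # True: falls in the habitual driving slot
```

In a real implementation the table would be produced by the habit-analysis function from the user's history rather than hand-written.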
Step 102: and determining that the electronic equipment is connected with a first device, wherein the first device has a voice broadcasting function.
It should be noted that, after it is determined that a target event has occurred, the method continues by detecting whether the electronic device is connected to a first device with a voice broadcast function. Because the user's interrupted reading information needs to be voice-broadcast through the first device, the first device must be connected to the electronic device and able to play voice.
The first device may be a plurality of types of devices having a voice broadcast function, and the connection of the first device may be determined based on a plurality of ways. By way of example, several possible ways of determining the connection of the electronic device to the first apparatus are given below.
Referring to fig. 4, fig. 4 is a schematic flow chart of another method for forwarding streams provided in an embodiment of the present application, and in the embodiment shown in fig. 2, the step 102 may include:
step 201: determining at least one of the following means for electrically or communicatively connecting the electronic device: earphone, car machine, intelligent audio amplifier, large-scale and intelligent wrist-watch.
It should be noted that, when it is determined that a target event has occurred, it is necessary to determine that the electronic device is connected to a device having a voice broadcast function. Earphones, car machines, intelligent sound boxes, large screens, and intelligent watches are common devices with a voice broadcast function; other types of devices are also possible, such as the speaker of the electronic device itself, which the user can configure for voice broadcasting. The electrical connection may include inserting a connector of the first device into a corresponding interface of the electronic device, or connecting the first device to a circuit of the electronic device; the communication connection may include the first device establishing a connection with the electronic device through WiFi, Bluetooth, or another communication method. The manner in which the first device is electrically or communicatively connected to the electronic device is therefore not limited in the embodiments of the present application.
Illustratively, determining whether the electronic device is connected to a first device may be accomplished by inspecting the electronic device's background data. Taking a first device connected over WiFi to a mobile phone running the Android system as an example, the phone background monitors connection and disconnection events through the ConnectivityManager to determine whether a first device is connected to the electronic device. Further, it may be necessary to detect whether other devices logged in with the same user account are connected to the electronic device via Bluetooth or WiFi, such as a car machine, an intelligent sound box, or a large screen.
It can be understood that voice broadcasting through the speaker of the electronic device is the more convenient approach, while broadcasting through an earphone, car machine, intelligent sound box, large screen, or intelligent watch is an approach in which the electronic device cooperates with another device. In the embodiments of the present application, because there are many voice devices capable of continuing the broadcast, the voice broadcast function is easier to realize; at the same time, determining that the electronic device is connected to a first device with a voice broadcast function provides the precondition for implementing the stream re-broadcasting.
In one possible implementation, when determining that the electronic device is electrically or communicatively connected to at least one of an earphone, a car machine, an intelligent sound box, a large screen, and an intelligent watch, the different devices are detected in a priority order.
Illustratively, the step of determining that the electronic device is electrically or communicatively connected to at least one of an earphone, a car machine, an intelligent sound box, a large screen, and an intelligent watch may include: (1) first detecting whether the electronic device is electrically or communicatively connected to an earphone, and if so, determining that the electronic device is connected to a voice broadcast device; (2) if no electrical or communication connection to an earphone is detected, detecting whether the electronic device is electrically or communicatively connected to a car machine, an intelligent sound box, a large screen, or an intelligent watch, and if so, determining that the electronic device is connected to a voice broadcast device.
It should be noted that, because the earphone is a voice broadcast device directly next to the user's ear, it offers the best privacy, so broadcasting the user's interrupted reading information through the earphone is the preferred way. When determining that the electronic device is connected to a first device, the electronic device therefore first detects whether an earphone is connected; if not, it detects whether it is connected to another device capable of voice broadcasting.
Further, in some specific scenarios, after it is determined that the electronic device is not connected to an earphone, there is also a priority order for detecting whether another voice broadcast device is connected.
For example, when it is determined that the electronic device has entered the driving state, the step of determining that the electronic device is connected to a first device includes: (1) detecting whether the electronic device is electrically or communicatively connected to an earphone, and if so, determining that the electronic device is connected to the first device; (2) if no electrical or communication connection to an earphone is detected, detecting whether the electronic device is electrically or communicatively connected to a car machine, and if so, determining that the electronic device is connected to a first device; (3) if no electrical or communication connection to a car machine is detected, detecting whether the electronic device is electrically or communicatively connected to an intelligent sound box, a large screen, or an intelligent watch, and if so, determining that the electronic device is connected to a first device.
It can be understood that when the user enters the driving state and the electronic device is not connected to an earphone, the electronic device is most likely connected to a car machine; detecting the car machine connection first therefore identifies a voice broadcast device more quickly, improving the user experience and reducing the power consumption of the electronic device.
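The priority order described in steps (1)–(3) above can be sketched as a simple ordered scan (a Python illustration; the device names and the set-based connection model are assumptions for the sketch):

```python
# Hypothetical sketch of the driving-scenario priority: earphone first,
# then car machine, then the remaining voice broadcast devices.

DRIVING_PRIORITY = ["earphone", "car machine",
                    "intelligent sound box", "large screen", "intelligent watch"]

def pick_first_device(connected_devices, priority=DRIVING_PRIORITY):
    """Return the highest-priority connected voice broadcast device, or None."""
    for device in priority:
        if device in connected_devices:
            return device
    return None

print(pick_first_device({"large screen", "car machine"}))  # car machine
```

Other scenarios would pass a different priority list; the scan itself is unchanged.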
In one possible implementation, when determining that the electronic device is electrically or communicatively connected to at least one of an earphone, a car machine, an intelligent sound box, a large screen, and an intelligent watch, there is not only a detection priority over the different devices; for some voice broadcast devices it must also be confirmed that the user is wearing the device.
Illustratively, the step of determining that the electronic device is electrically or communicatively connected to at least one of an earphone, a car machine, an intelligent sound box, a large screen, and an intelligent watch may include: (1) detecting whether the electronic device is electrically or communicatively connected to an earphone and has received a photoplethysmography (PPG) signal, and if so, determining that the electronic device is connected to a first device; (2) if no electrical or communication connection to an earphone is detected, or no PPG signal is detected, detecting whether the electronic device is electrically or communicatively connected to a car machine, an intelligent sound box, a large screen, or an intelligent watch; and if such a connection is detected, determining that the electronic device is connected to a first device.
When determining whether the electronic device is connected to a first device, after it is determined that the electronic device is connected to an earphone, earphone wear detection is performed; only when it is determined that the user is wearing the earphone is the earphone treated as the connected first device. If the electronic device is not connected to an earphone, it detects whether it is connected to a car machine, an intelligent sound box, a large screen, an intelligent watch, or the like.
The electronic device performs earphone wear detection by checking whether it has received a PPG signal sent by the earphone; the PPG signal indicates that the earphone is in contact with skin. When the user wears the earphone, the earphone contacts the user's skin, generates a PPG signal, and transmits it to the electronic device, so the electronic device can determine whether the user is wearing the earphone from whether the PPG signal is received. Detecting whether the user is wearing the earphone makes it possible to accurately determine whether the earphone is a usable voice broadcast device, ensuring the reliability of the voice broadcast.
Further, when it is detected that the electronic device is connected to an earphone but the electronic device has not received the PPG signal that a worn earphone feeds back, the earphone can be prompted to send the PPG signal.
Illustratively, the step of determining that the electronic device is electrically or communicatively connected to at least one of an earphone, a car machine, an intelligent sound box, a large screen, and an intelligent watch may also include: if an electrical or communication connection to an earphone is detected but no PPG signal is detected, sending a first instruction to the earphone, where the first instruction instructs the earphone to turn on its PPG function.
It should be noted that when the earphone does not transmit a PPG signal to the electronic device, its PPG function may be temporarily turned off; the electronic device can then send the first instruction instructing the earphone to turn the PPG function on, so that the electronic device can obtain the PPG signal and determine whether the user is wearing the earphone. It can be appreciated that, after the electronic device receives the PPG signal, it may also send a second instruction instructing the earphone to turn the PPG function off, so as to reduce the earphone's power consumption.
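The wear-detection handshake above can be sketched as a small decision function (Python; the return shape and the "OPEN_PPG"/"CLOSE_PPG" names are illustrative stand-ins for the first and second instructions):

```python
# Hypothetical sketch: a connected earphone qualifies as the first device only
# once a PPG signal confirms skin contact; otherwise the electronic device
# sends the first instruction asking the earphone to turn its PPG function on.

def check_earphone(connected, ppg_received):
    """Return (is_first_device, instruction_to_send)."""
    if not connected:
        return (False, None)        # fall back to car machine, sound box, ...
    if ppg_received:
        return (True, "CLOSE_PPG")  # worn; second instruction saves earphone power
    return (False, "OPEN_PPG")      # first instruction: enable PPG, then re-check

print(check_earphone(True, False))  # (False, 'OPEN_PPG')
```

After sending "OPEN_PPG" the electronic device would re-run the check once a PPG signal arrives (or times out).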
Step 103: and determining the first interface of the electronic equipment as a reading interface, wherein the first interface of the electronic equipment is the last using interface before the electronic equipment generates a target event.
It should be noted that, after it is determined that the electronic device is connected to a first device with a broadcast function, the method continues by detecting whether the last interface used before the target event occurred is a reading interface. A reading interface is an interface on which the user mainly reads text information, and may include a novel interface, a chat interface, and so on; because there are many types of reading interface, this embodiment does not limit them. When the user reads information on the electronic device, the electronic device caches information about the interface in use, so the last-used interface can be identified as a reading interface from the cached information.
In some embodiments, determining that the first interface of the electronic device is a reading interface may consist of determining that the first interface is an interface of a reading-type APP.
Referring to fig. 5, fig. 5 is a schematic flow chart of another method for forwarding streams according to an embodiment of the present application, and in the embodiment shown in fig. 2, the step 103 may include:
step 301: determining a first interface of the electronic equipment as a first type reading interface, wherein the first type reading interface is an interface of a reading type APP.
It should be noted that reading interfaces can be divided into several types. The first type of reading interface corresponds to an interface of a reading-type APP: interfaces of novel, news, or article APPs may be referred to as first type reading interfaces, and may also be called long-text reading interfaces. For example, the interface of a novel APP or a headline-news APP can serve as a first type reading interface. Determining the type of the reading interface provides basic information for the subsequent voice broadcasting of the information on the first interface.
Further, there are various methods for determining that the first interface of the electronic device is the first type of reading interface, for example, the first interface may be determined through a pre-stored white list, or determined through a control layout analysis method.
In one possible implementation, the step of determining that the first interface of the electronic device is a first type of reading interface may include: (1) Acquiring first APP information, wherein the first APP information is information of an application program APP to which the first interface belongs; (2) when the first APP information is in a pre-stored first white list, determining that the first interface is a first type reading interface, wherein the first white list is an APP information list corresponding to the first type reading interface.
It should be noted that determining the type of the first interface from the information about the APP to which it belongs is a relatively convenient and accurate approach. The electronic device may pre-store a first white list, which is the list of APP information corresponding to the first type reading interface. The electronic device obtains the information about the APP to which the first interface belongs from the cached information package of the first interface and compares it with the first white list; if the first APP information is in the pre-stored first white list, the first interface is determined to be a first type reading interface. The first white list can also be updated over time to keep the interface-type judgment accurate. For example, if the name of the APP to which the first interface belongs is obtained as a headline-news APP, and that APP is found in the first white list, the first interface is determined to be a first type reading interface.
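The whitelist lookup can be sketched as a dictionary check (Python; the whitelist contents are invented examples, and the second white list is included only to show how the same mechanism extends to chat-type APPs as described later):

```python
# Hypothetical sketch of whitelist-based interface classification.
FIRST_WHITELIST = {"headline news", "novel reader"}  # reading-type APPs (assumed)
SECOND_WHITELIST = {"WeChat"}                        # chat-type APPs (assumed)

def classify_interface(app_name):
    """Map the owning APP's name to a reading-interface type, if any."""
    if app_name in FIRST_WHITELIST:
        return "first type reading interface"
    if app_name in SECOND_WHITELIST:
        return "second type reading interface"
    return None  # fall back to control-layout or word-count analysis

print(classify_interface("headline news"))  # first type reading interface
```

Returning None rather than a type models the browser-APP case below, where the whitelist alone cannot decide.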
In some cases, the type of the first interface cannot be determined from the APP information alone; for example, when the first interface belongs to a browser APP, its type cannot be determined directly from the browser APP, and other methods can be used.
In one possible implementation, the electronic device may determine that the first interface is a first type reading interface using a control layout analysis method, which may include the steps of: (1) obtaining the cached information package of the first interface; (2) obtaining, from the information package, the total size of the text controls in the first interface and the total size of the first interface; (3) if the ratio of the total size of the text controls to the total size of the first interface is greater than a preset size threshold, considering the first interface to be a first type reading interface.
In one possible implementation, the electronic device may further determine that the first interface is a first type reading interface using a page word-count statistics method, which may include the steps of: (1) obtaining the cached information package of the first interface; (2) obtaining, from the information package, the total size of the text controls in the first interface and the total size of the first interface; (3) if the ratio of the total size of the text controls to the total size of the first interface is smaller than the preset size threshold, obtaining the total size of the picture controls in the first interface; (4) if the ratio of the total size of the picture controls to the total size of the first interface is greater than the preset size threshold, identifying the number of characters contained in the picture controls; (5) if the number of characters contained in the picture controls is greater than a preset character-count threshold, determining that the first interface is a first type reading interface.
In this way, the page word-count statistics method can be applied when the control layout analysis method is inconclusive, covering reading interfaces that present their text content through picture controls. It can be appreciated that other ways of determining whether the first interface is a first type reading interface may also be adopted; the embodiments of the present application do not limit them.
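The two fallback checks can be combined into one classifier sketch (Python; the threshold values and the pre-computed `ocr_char_count` input are assumptions — in practice the character count would come from recognizing text inside the picture controls):

```python
# Hypothetical sketch: control-layout analysis first, then page word-count
# statistics for interfaces that render their text inside picture controls.

SIZE_THRESHOLD = 0.5   # preset size (area-ratio) threshold, assumed value
CHAR_THRESHOLD = 100   # preset character-count threshold, assumed value

def is_first_type(text_area, picture_area, total_area, ocr_char_count=0):
    """Return True if the interface looks like a first type reading interface."""
    if total_area <= 0:
        return False
    if text_area / total_area > SIZE_THRESHOLD:
        return True                              # mostly text controls
    if picture_area / total_area > SIZE_THRESHOLD:
        return ocr_char_count > CHAR_THRESHOLD   # text rendered inside pictures
    return False

print(is_first_type(800, 100, 1000))       # True via layout analysis
print(is_first_type(100, 800, 1000, 250))  # True via word-count statistics
```

Using area ratios rather than absolute sizes keeps the thresholds independent of screen resolution, which matches the ratio-based conditions described above.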
In some embodiments, determining that the first interface of the electronic device is a reading interface may include determining that the first interface is an interface of a chat-type APP.
Referring to fig. 6, fig. 6 is a schematic flow chart of another stream re-broadcasting method according to an embodiment of the present application. In the embodiment shown in fig. 2, the step 103 may include:
step 401: determining that a first interface of the electronic equipment is a second type reading interface, wherein the second type reading interface is an interface of a chat APP.
It should be noted that reading interfaces can be divided into multiple types, and the second type of reading interface corresponds to the interface of a chat APP; the interfaces of such APPs are referred to as second type reading interfaces, and may also be referred to as short-text reading interfaces. For example, chat applications such as WeChat can provide the second type of reading interface. Determining the type of the reading interface provides a basis for the subsequent voice broadcasting of the information on the first interface.
Further, there are various methods for determining that the first interface of the electronic device is the second type of reading interface, for example, it may be determined by a pre-stored white list.
For example, the step of determining that the first interface of the electronic device is a second type of reading interface may include: (1) Acquiring second APP information, wherein the second APP information is information of an application program APP to which the first interface belongs; (2) And when the second APP information is in a pre-stored second white list, determining that the first interface is a second type reading interface, wherein the second white list is an APP information list corresponding to the second type reading interface.
It should be noted that determining the type of the first interface according to the information of the APP to which it belongs is a relatively convenient and accurate approach. The electronic device may pre-store a second white list, where the second white list is an APP information list corresponding to the second type reading interface. The electronic device acquires the information of the APP to which the first interface belongs from the cached information packet of the first interface, compares it with the second white list, and determines that the first interface is a second type reading interface if the second APP information is in the pre-stored second white list. In addition, the second white list can be updated promptly to ensure the accuracy of the first interface type determination. For example, if the name of the APP to which the first interface belongs is acquired as WeChat, and WeChat is found in the second white list, the first interface is determined to be a second type reading interface.
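A minimal sketch of the white-list check described above. The list contents are assumptions (the patent names only WeChat as an example), and the update function simply models the "updated promptly" remark.

```python
# Assumed initial contents; the description names WeChat as one example entry.
SECOND_WHITE_LIST = {"WeChat"}

def is_second_type_reading_interface(app_name):
    """An interface is a second-type (chat) reading interface iff its APP is white-listed."""
    return app_name in SECOND_WHITE_LIST

def update_second_white_list(app_name):
    """The white list can be updated promptly to keep classification accurate."""
    SECOND_WHITE_LIST.add(app_name)
```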
It can be appreciated that other manners of determining whether the first interface is the second type reading interface may be adopted, and the embodiment of the present application does not limit the manner of determining whether the first interface is the second type reading interface.
Step 104: receiving first confirmation information sent by the first device, and voice-broadcasting the information on the first interface through the first device, wherein the first confirmation information is information by which the user confirms that voice broadcasting is to be carried out through the first device.
It should be noted that, after the electronic device determines that the above three conditions are met, it already satisfies the prerequisites for voice broadcasting, and then needs to obtain the user's consent. The electronic device may therefore initiate, through the first device, a query about whether to continue listening, for example asking by voice whether to continue listening to the information that was just being read; the user may consent or refuse by interacting with the first device, without further manual operation of the electronic device. For example, the manner of user feedback includes, but is not limited to, a single click, a double click, voice interaction, gesture interaction, and similar interaction manners on the first device. After the electronic device obtains, through the first device, the first confirmation information by which the user confirms voice broadcasting, the information on the last-used interface can be voice-broadcast through the first device. The electronic device can broadcast the information on the last-used interface by invoking an accessibility function module of the electronic device or by invoking a text-to-speech control.
It can be understood that, by adopting the stream re-broadcasting method provided by the embodiment of the present application, when a user reading information on the electronic device is interrupted, the interrupted reading is re-broadcast as an audio stream through the voice device, turning "read later" into "listen now" and improving the user experience.
In one possible implementation, after determining that the first interface is a first type reading interface, the information on the first interface may be voice-broadcast by the first device.
Referring to fig. 7, fig. 7 is a schematic flow chart of another stream re-broadcasting method provided in an embodiment of the present application. In the embodiment shown in fig. 5, the step 104 may include:
step 302: receiving the first confirmation information sent by the first device, and acquiring a first resume position, wherein the first resume position is the position in the first interface at which the user interrupted reading;
step 303: starting, from the first resume position, voice broadcasting of the information on the first interface through the first device.
It should be noted that, after the first interface is determined to be a first type reading interface, a simple voice broadcasting manner is to start from the first character of the first interface. However, the first character is often not the position at which the user interrupted reading. To improve the user experience, the position in the first interface at which the user interrupted reading can therefore be obtained, and broadcasting can resume from that position.
Further, there are many methods for obtaining the first resume position; three of them are described below.
Method one: the position at which the user interrupted reading in the first interface is calculated from background data generated by user operations during reading. For example, the step of obtaining the first resume position may include: (1) acquiring a first reading speed, wherein the first reading speed is the reading speed of the user; (2) acquiring a first residence time, wherein the first residence time is the residence time of the user on the first interface; (3) calculating a first reading word count according to the first reading speed and the first residence time, wherein the first reading word count is the number of words the user has read on the first interface; (4) acquiring the target character corresponding to the first reading word count, and determining the complete sentence corresponding to the target character as the first resume position.
It should be noted that the number of words the user has read on the first interface is obtained from the user's reading speed and the residence time on the first interface; the corresponding target character is determined from that word count, and the complete sentence containing the target character is taken as the position at which the user interrupted reading, from which broadcasting resumes.
It can be understood that, in some scenarios, the complete sentence preceding or following the one containing the target character may also be used as the resume position. Determining the corresponding complete sentence from the target character can be achieved by extracting punctuation from the text information; for example, after the target character corresponding to the reading word count is found, identifying the two periods preceding it determines the complete sentence before the one containing the target character.
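Steps (1)–(4) of Method one can be sketched as below. The reading speed is in words per second, sentence segmentation by full-width or ASCII period is an assumption, and so is treating one character as one word (reasonable for Chinese text, which this patent targets).

```python
def first_resume_position(page_text, reading_speed, residence_time):
    """Return the index at which playback resumes: the start of the sentence
    containing the character the user is estimated to have reached."""
    words_read = int(reading_speed * residence_time)   # first reading word count
    target = min(words_read, len(page_text) - 1)       # target character index
    # Walk back from the target character to the previous sentence boundary.
    boundary = max(page_text.rfind("\u3002", 0, target + 1),
                   page_text.rfind(".", 0, target + 1))
    return boundary + 1  # resume at the start of the interrupted sentence
```

If no sentence boundary precedes the target character, the function falls back to index 0, i.e. the first character of the interface.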
The method for obtaining the user's reading speed may differ according to the page layout mode and the user's reading habits; it is introduced below through different scenes.
Scene one: if the user turns pages by sliding the screen left and right or by clicking a page-turning key during reading, the user's reading speed may be calculated from the time difference between two page turns and the word count of the corresponding interface. The steps may include: (1) acquiring a first time difference, wherein the first time difference is the difference between the time stamp of the last page-turning event before the target event occurs on the electronic device and the time stamp of the penultimate page-turning event before the target event occurs; (2) acquiring a first interface word count, wherein the first interface word count is the word count of the page reading interface opened by the electronic device after the penultimate page-turning event; (3) dividing the first interface word count by the first time difference to obtain the first reading speed.
Scene two: if the user moves the interface position by sliding the screen up and down during reading, the user's reading speed can be calculated from the time difference between two screen slides and the word count corresponding to the interface position difference. The steps may include: (1) acquiring a second time difference, wherein the second time difference is the difference between the time stamp of the last screen-sliding event before the target event occurs on the electronic device and the time stamp of the penultimate screen-sliding event before the target event occurs; (2) acquiring a second interface word count, wherein the second interface word count is the word count corresponding to the interface position difference between the slide starting position of the last screen-sliding event and the slide starting position of the penultimate screen-sliding event before the target event occurs; (3) dividing the second interface word count by the second time difference to obtain the first reading speed.
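The scene-one computation can be sketched as below; timestamps in seconds are an assumption. Scene two is the same quotient, with the page word count replaced by the word count spanned between the two slide starting positions.

```python
def first_reading_speed(last_turn_ts, penultimate_turn_ts, page_word_count):
    """Words per second read on the page shown between the two most recent
    page-turning events before the target event."""
    time_diff = last_turn_ts - penultimate_turn_ts   # first time difference
    if time_diff <= 0:
        raise ValueError("page-turn timestamps must be strictly increasing")
    return page_word_count / time_diff               # quotient = first reading speed
```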
It can be understood that the method for obtaining the user's reading speed is not limited to the above two; in other scenarios it may likewise be calculated from background data generated during the user's reading, and the embodiment of the present application does not limit the method used.
Method two: the electronic device has a user habit analysis function. When this function is enabled, the electronic device collects the user's usual reading habit data, and generates and stores the user's reading speed from records such as page turns and switches of reading content. Thus, the step of obtaining the first resume position may include: (1) acquiring a first reading speed, wherein the first reading speed is the user reading speed stored in the electronic device; (2) acquiring a first residence time, wherein the first residence time is the residence time of the user on the first interface; (3) calculating a first reading word count according to the first reading speed and the first residence time, wherein the first reading word count is the number of words the user has read on the first interface; (4) acquiring the target character corresponding to the first reading word count, and determining the complete sentence corresponding to the target character as the first resume position.
Method three: the electronic device has a camera gaze-tracking function. When this function is enabled, the electronic device can track the user's gaze during reading. Thus, the step of obtaining the first resume position may include: (1) retrieving the user's last gaze position on the first interface before the target event occurs on the electronic device; (2) taking the complete sentence closest to that last gaze position as the first resume position.
It can be appreciated that the method for determining the position at which the user interrupted reading in the first interface is not limited to the above three methods; the embodiment of the present application does not limit the method for obtaining the first resume position.
In one possible implementation, after determining that the first interface is a second type reading interface, the information on the first interface may be voice-broadcast by the first device.
Referring to fig. 8, fig. 8 is a schematic flow chart of another stream re-broadcasting method provided in an embodiment of the present application. In the embodiment shown in fig. 6, the step 104 may include:
step 402: receiving the first confirmation information sent by the first device, and acquiring first continuous broadcasting content, wherein the first continuous broadcasting content is the latest message information in the first interface;
step 403: broadcasting the first continuous broadcasting content by voice through the first device.
It should be noted that, after the first interface is determined to be a second type reading interface, the latest message in the first interface may be obtained as the continuous broadcasting content and voice-broadcast through the first device. During broadcasting, the text-to-speech function module in the accessibility mode of the electronic device can be invoked to broadcast the counterpart's latest reply.
Further, because the chat interface is provided with a message control, the method for acquiring the first continuous broadcasting content may include: (1) monitoring a first message control, wherein the first message control is the control that displays chat messages in the first interface; (2) taking the latest message acquired from the first message control as the first continuous broadcasting content.
It should be noted that the latest message data is displayed in the message control, and each message is an independent data item. Therefore, by monitoring the message control and performing voice broadcasting when new message data appears in it, the user can conveniently obtain the latest chat message, which improves the user experience.
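The monitoring loop described above can be sketched as follows. The control-update callback and the `speak` hook are assumptions standing in for the platform's accessibility and text-to-speech interfaces, not a real API.

```python
class MessageControlMonitor:
    """Watches the first message control and speaks each newly appended message."""

    def __init__(self, speak):
        self._speak = speak        # e.g. a TTS broadcast callback toward the first device
        self._last_seen = None

    def on_control_update(self, messages):
        """Invoked whenever the chat message control re-renders its message list;
        each element of `messages` is an independent message data item."""
        if messages and messages[-1] != self._last_seen:
            self._last_seen = messages[-1]   # latest message = continuous broadcasting content
            self._speak(self._last_seen)     # voice-broadcast through the first device
```

Re-renders that add no new message trigger no broadcast, so the user hears each chat message exactly once.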
In some embodiments, when the user needs to stop the voice broadcasting, the user may issue information to stop it.
Referring to fig. 9, fig. 9 is a schematic flow chart of another stream re-broadcasting method according to an embodiment of the present application. In the embodiment shown in fig. 2, the method may further include:
step 105: receiving first stop information, and stopping the voice broadcasting of the information on the first interface, wherein the first stop information is information indicating that the user requests to stop the voice broadcasting.
It should be noted that the user may stop the voice broadcasting by interacting with the first device, without manually operating the electronic device. For example, the manner of user feedback includes, but is not limited to, a single click, a double click, voice interaction, gesture interaction, and similar interaction manners on the first device.
The embodiment of the application also provides a computer storage medium, wherein the computer storage medium can store a program, and the program controls a device where the computer readable storage medium is located to execute part or all of the steps in the above embodiment when running. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (random access memory, RAM), or the like.
Any combination of one or more computer readable media may be utilized as the above-described computer readable storage media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (erasable programmable read only memory, EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio Frequency (RF), etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for the present specification may be written in one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (local area network, LAN) or a wide area network (wide area network, WAN), or may be connected to an external computer (e.g., connected via the internet using an internet service provider).
Embodiments of the present application also provide a computer program product containing executable instructions that, when executed on a computer, cause the computer to perform some or all of the steps of the method embodiments described above.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In the description of embodiments of the present invention, a description of reference to the terms "one embodiment," "some embodiments," "examples," "particular examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present specification. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present specification, the meaning of "plurality" means at least two, for example, two, three, etc., unless explicitly defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present specification in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present specification.
Depending on the context, the word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination" or "in response to detection". Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to a determination" or "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)", depending on the context.
Those of ordinary skill in the art will appreciate that the various elements and algorithm steps described in the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of the two. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In several embodiments provided by the present invention, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing is merely exemplary embodiments of the present invention, and any person skilled in the art may easily conceive of changes or substitutions within the technical scope of the present invention, which should be covered by the present invention. The protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (22)
1. A method for stream forwarding, applied to an electronic device having a display screen, comprising:
determining that the electronic equipment generates a target event;
determining that the electronic equipment is connected with a first device, wherein the first device has a voice broadcasting function;
determining a first interface of the electronic equipment as a reading interface, wherein the first interface of the electronic equipment is the last using interface before the electronic equipment generates a target event;
and receiving first confirmation information sent by the first device, and voice-broadcasting information on the first interface through the first device, wherein the first confirmation information is information by which the user confirms that voice broadcasting is to be carried out through the first device.
2. The method of claim 1, wherein the determining that the electronic device has a target event comprises:
and determining that the electronic equipment generates a screen-off event, generates a page jump event, enters a motion state, enters a driving state or enters a riding state.
3. The method of claim 1, wherein the determining that the electronic device has a target event comprises:
acquiring current time information, and acquiring user habit information corresponding to the current time information according to pre-stored time information and data of user habit information corresponding to the time information;
and if the user habit information indicates that the user enters a motion state, enters a driving state or enters a riding state, determining that the electronic equipment generates a target event.
4. The method of claim 2, wherein the determining that the electronic device has a screen-off event comprises:
when the electronic equipment is detected to be in a screen-off state, acquiring a first time length, wherein the first time length is the time length when a display screen is kept in a screen-on state before the electronic equipment is in the screen-off state;
and if the first time length is greater than a preset screen-lighting time length threshold value, determining that a screen-extinguishing event occurs to the electronic equipment.
5. The method of claim 1, wherein the determining that the electronic device is connected to a first apparatus comprises:
determining that the electronic device is electrically or communicatively connected to at least one of the following apparatuses: an earphone, a car machine, a smart speaker, a large screen, and a smart watch.
6. The method of claim 5, wherein the determining that the electronic device is electrically or communicatively connected to at least one of: an earphone, a car machine, a smart speaker, a large screen, and a smart watch comprises:
detecting whether the electronic device is electrically or communicatively connected to a headset,
if yes, determining that the electronic equipment is connected with the first device,
if not, detecting whether the electronic equipment is electrically or communicatively connected with a car machine, an intelligent sound box, a large screen or an intelligent watch, and if so, determining that the electronic equipment is connected with a first device.
7. The method of claim 5, wherein the determining that the electronic device is electrically or communicatively connected to at least one of: an earphone, a car machine, a smart speaker, a large screen, and a smart watch comprises:
detecting whether the electronic device is electrically or communicatively connected to an earphone and receiving a photoplethysmography PPG signal,
if so, determining that the electronic equipment is connected with a first device, wherein the PPG signal is a PPG signal sent to the electronic equipment by the earphone when the earphone is worn by a user,
if not, detecting whether the electronic equipment is electrically or communicatively connected with a car machine, an intelligent sound box, a large screen or an intelligent watch, and if so, determining that the electronic equipment is connected with a first device.
8. The method of claim 7, wherein the method further comprises:
if the electronic equipment is detected to be electrically connected or in communication with an earphone and the PPG signal is not detected, a first instruction is sent to the earphone, and the first instruction is used for indicating the earphone to open a PPG function.
9. The method of claim 6, wherein the detecting whether the electronic device is electrically or communicatively connected to a car machine, a smart speaker, a large screen, or a smart watch when it is determined that the electronic device is in a driving state comprises:
detecting whether the electronic equipment is electrically connected or in communication with the car machine;
if yes, determining that the electronic equipment is connected with a first device;
if not, detecting that the electronic equipment is electrically or communicatively connected with the intelligent sound box, the large screen or the intelligent watch, and if yes, determining that the electronic equipment is connected with the first device.
10. The method of claim 1, wherein the determining that the first interface of the electronic device is a reading interface comprises:
determining a first interface of the electronic equipment as a first type reading interface, wherein the first type reading interface is an interface of a reading type APP; or determining the first interface of the electronic equipment as a second type reading interface, wherein the second type reading interface is an interface of a chat APP.
11. The method of claim 10, wherein the determining that the first interface is a first type of reading interface comprises:
acquiring first APP information, wherein the first APP information is information of an application program APP to which the first interface belongs;
when the first APP information is in a pre-stored first white list, determining that the first interface is a first type reading interface, wherein the first white list is an APP information list corresponding to the first type reading interface.
12. The method of claim 10 or 11, wherein said voice reporting of information on the first interface by the first device comprises:
acquiring a first resume position, wherein the first resume position is the position in the first interface at which the user interrupted reading;
and starting, from the first resume position, voice broadcasting of the information on the first interface through the first device.
13. The method of claim 12, wherein the obtaining the first resume position comprises:
acquiring a first reading speed, wherein the first reading speed is the reading speed of the user;
acquiring a first dwell time, wherein the first dwell time is the time the user has stayed on the first interface;
calculating a first read word count from the first reading speed and the first dwell time, wherein the first read word count is the number of words the user has read on the first interface;
and acquiring the target character corresponding to the first read word count, and determining the end of the complete sentence containing the target character as the first resume position.
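The resume-position computation of claims 12-13 can be sketched as follows: multiply the reading speed by the dwell time to estimate how many characters were read, then snap back to the end of the last complete sentence so broadcasting restarts at a sentence boundary. This is a minimal illustration under assumed units (characters per second, seconds); the set of sentence terminators is also an assumption:

```python
def first_resume_position(reading_speed: float, dwell_time: float, text: str) -> int:
    """Estimate the character offset at which voice broadcasting should resume.

    chars_read = reading_speed * dwell_time; the result is snapped back to
    the end of the last complete sentence at or before that offset.
    """
    chars_read = min(int(reading_speed * dwell_time), len(text))
    terminators = ".!?\u3002\uff01\uff1f"  # Latin and CJK sentence enders (assumed)
    for i in range(chars_read, 0, -1):
        if text[i - 1] in terminators:
            return i  # resume right after the last complete sentence
    return 0  # no full sentence read yet: resume from the top

page = "First sentence. Second sentence. Third sentence."
print(first_resume_position(5.0, 4.0, page))  # 20 chars estimated -> resumes at offset 15
```

At 5 characters per second over a 4-second dwell, 20 characters were read; the last sentence terminator at or before offset 20 is the period ending "First sentence.", so broadcasting resumes with "Second sentence."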
14. The method of claim 13, wherein the obtaining the first reading speed comprises:
acquiring a first time difference, wherein the first time difference is the difference between the timestamp of the last page-turning event before a target event occurs on the electronic device and the timestamp of the second-to-last page-turning event before the target event;
acquiring a first interface word count, wherein the first interface word count is the word count of the reading interface opened on the electronic device after the second-to-last page-turning event;
and dividing the first interface word count by the first time difference to obtain the first reading speed.
15. The method of claim 13, wherein the obtaining the first reading speed comprises:
acquiring a second time difference, wherein the second time difference is the difference between the timestamp of the last screen-sliding event before a target event occurs on the electronic device and the timestamp of the second-to-last screen-sliding event before the target event;
acquiring a second interface word count, wherein the second interface word count is the number of words between the sliding start position of the last screen-sliding event and the sliding start position of the second-to-last screen-sliding event;
and dividing the second interface word count by the second time difference to obtain the first reading speed.
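Both speed estimates (claims 14 and 15) are the same quotient: a word count associated with the interval between the two most recent events, divided by the length of that interval. A minimal sketch, assuming timestamps in seconds and counts in words:

```python
def reading_speed(word_count: int, t_last: float, t_second_last: float) -> float:
    """Words read per second between the two most recent page-turn or
    screen-slide events (claims 14-15).

    word_count is the page word count (claim 14) or the number of words
    between the two sliding start positions (claim 15).
    """
    dt = t_last - t_second_last
    if dt <= 0:
        raise ValueError("event timestamps must be strictly increasing")
    return word_count / dt

# e.g. a 300-word page read between event timestamps 100 s and 160 s:
print(reading_speed(300, 160.0, 100.0))  # 5.0 words per second
```

Only the source of the word count differs between the two claims; the division is identical.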
16. The method of claim 10, wherein the determining that the first interface is a second type of reading interface comprises:
acquiring second APP information, wherein the second APP information is information of the application program (APP) to which the first interface belongs;
and when the second APP information is in a pre-stored second whitelist, determining that the first interface is a second type of reading interface, wherein the second whitelist is a list of APP information corresponding to the second type of reading interface.
17. The method of claim 16, wherein said voice broadcasting information on said first interface by said first device comprises:
acquiring first resume content, wherein the first resume content is the latest message in the first interface;
and voice-broadcasting the first resume content through the first device.
18. The method of claim 17, wherein the acquiring the first resume content comprises:
monitoring a first message control, wherein the first message control is the control that displays chat messages in the first interface;
and taking the latest message acquired from the first message control as the first resume content.
19. The method according to claim 1, wherein the method further comprises:
and upon receiving first stop information, stopping the voice broadcasting of the information on the first interface, wherein the first stop information is information indicating that the user requests to stop the voice broadcast.
20. An electronic device comprising a memory for storing computer program instructions and a processor for executing the computer program instructions, wherein the computer program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of claims 1-19.
21. A computer readable storage medium, characterized in that the computer readable storage medium comprises a stored program, wherein the program, when run, controls a device in which the computer readable storage medium is located to perform the method of any one of claims 1-19.
22. A computer program product comprising executable instructions which, when executed on a computer, cause the computer to perform the method of any of claims 1-19.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210081741.1A CN116527807A (en) | 2022-01-24 | 2022-01-24 | Stream re-broadcasting method, electronic device, storage medium and program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116527807A true CN116527807A (en) | 2023-08-01 |
Family
ID=87399881
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210081741.1A Pending CN116527807A (en) | 2022-01-24 | 2022-01-24 | Stream re-broadcasting method, electronic device, storage medium and program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116527807A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104731476A (en) * | 2015-03-21 | 2015-06-24 | 苏州乐聚一堂电子科技有限公司 | Handheld intelligent electronic equipment electronic information expressing method |
CN108646975A (en) * | 2015-12-03 | 2018-10-12 | 广州阿里巴巴文学信息技术有限公司 | Information processing method and device |
CN109584879A (en) * | 2018-11-23 | 2019-04-05 | 华为技术有限公司 | A kind of sound control method and electronic equipment |
CN110365836A (en) * | 2019-06-06 | 2019-10-22 | 华为技术有限公司 | A kind of reminding method of notice, terminal and system |
CN110618783A (en) * | 2019-09-12 | 2019-12-27 | 北京小米移动软件有限公司 | Text broadcasting method, device and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||