CN111970401B - Call content processing method, electronic equipment and storage medium - Google Patents
- Publication number
- CN111970401B (application CN201910416825.4A)
- Authority
- CN
- China
- Prior art keywords
- electronic device
- interface
- electronic equipment
- input
- call
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/725—Cordless telephones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M2250/00—Details of telephonic subscriber devices
- H04M2250/12—Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- User Interface Of Digital Computer (AREA)
- Telephone Function (AREA)
Abstract
The embodiments of this application provide a call content processing method and an electronic device. The method comprises: while the electronic device is in a call connection state, the electronic device receives a first input; in response to the first input, the electronic device acquires at least one piece of key information of the call content; and the electronic device displays a first interface according to the key information, the first interface comprising a tag corresponding to the key information. The embodiments of this application can thus process call content and provide at least one piece of key information related to it.
Description
Technical Field
The embodiments of this application relate to communication technology, and in particular to a call content processing method and an electronic device.
Background
With the development of science and technology, information has become increasingly important in people's lives, and more and more electronic devices provide information services that supply users with the information they need.
As shown in fig. 1, taking a mobile phone as an example, a prior-art information service is implemented as follows:
as shown in fig. 1(a), the electronic device displays a first interface 101. The first interface 101 is a text message conversation interface, and may include text and images. The user can perform a multi-touch operation on an arbitrary area in the first interface 101.
As shown in fig. 1(b), in response to a multi-touch operation of a user on a certain area (e.g., a first area), the electronic device performs image processing on the area, extracts text in the area (e.g., says movie a good), and splits the text into a plurality of fields (e.g., "say, listen, and say", "movie a", and "good"). The electronic device then displays the second interface 102. The second interface 102 includes a plurality of tags 1021, and each tag corresponds to a field. The user may click on search key 1022 upon selecting one or more field tags (e.g., selecting the field tag of "movie a").
As shown in fig. 1(c), in response to the user clicking the search button 1022, the electronic device launches the browser application, displaying the interface 103. The interface 103 is an application interface of a browser. The electronic device then automatically fills in the search box with the field of the field tag (e.g., movie a) selected by the user and performs a search to obtain information related to the field.
The prior art thus identifies displayed content in order to provide information related to it; however, users expect richer information services.
Disclosure of Invention
The embodiments of this application provide a call content processing method and an electronic device that can process call content so as to provide information related to the call content.
In a first aspect, a call content processing method provided in an embodiment of this application includes: while the electronic device is in a call connection state, the electronic device receives a first input; in response to the first input, the electronic device acquires at least one piece of key information of the call content; and the electronic device displays a first interface according to the key information, the first interface comprising a tag corresponding to the key information.
The call content may include the first voice data, the second voice data, or both. The first voice data is voice data that the electronic device generates by collecting external sound; the second voice data is voice data that the electronic device receives from another electronic device with which it has a call connection. The key information may include text data of part of the call content, a keyword in the call content, and a web page link related to the keyword. The tag comprises any one or more of a text tag, a keyword tag and an information tag: the text tag corresponds to the text data, the keyword tag corresponds to the keyword, and the information tag corresponds to the web page link. The first input may be that the electronic device is folded or unfolded.
The method can obtain the key information in different ways. In one possible way, the electronic device sends the call content to a first server; the first server receives the call content sent by the first electronic device and converts it into text data; the first server then extracts a keyword from the text data and sends the keyword to a second server to acquire a web page link related to the keyword; finally, the first server sends the web page link, the text data and/or the keyword to the electronic device.
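As a rough, non-limiting sketch of this possible mode, the following Java outline strings the server-side stages together (voice-to-text conversion, keyword extraction, web-link lookup). The component interfaces SpeechToText, KeywordExtractor and WebSearchClient, and the KeyInformation container, are illustrative placeholders, not modules defined in this application.

```java
import java.util.List;

// Hypothetical outline of the first server's processing of call content.
public class CallContentPipeline {
    interface SpeechToText { String transcribe(byte[] voiceData); }
    interface KeywordExtractor { List<String> extract(String text); }
    interface WebSearchClient { List<String> findLinks(List<String> keywords); } // queries the second server

    public record KeyInformation(String text, List<String> keywords, List<String> links) {}

    private final SpeechToText asr;
    private final KeywordExtractor extractor;
    private final WebSearchClient search;

    public CallContentPipeline(SpeechToText asr, KeywordExtractor extractor, WebSearchClient search) {
        this.asr = asr;
        this.extractor = extractor;
        this.search = search;
    }

    // Produces the key information (text, keywords, web links) for one chunk of call audio.
    public KeyInformation process(byte[] voiceData) {
        String text = asr.transcribe(voiceData);          // call content -> text data
        List<String> keywords = extractor.extract(text);  // text data -> keywords
        List<String> links = search.findLinks(keywords);  // keywords -> related web page links
        return new KeyInformation(text, keywords, links);
    }
}
```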
With the call content processing method provided by the embodiments of this application, the user can obtain key information related to the call content while the call is in progress.
In one possible design, the electronic device receives a third input on the information tag, and in response to the third input, the electronic device displays the web page associated with the information tag. The electronic device may access the web page associated with the information tag through the browser kernel and display it in a web view (WebView). The user can thus quickly access a web page related to the call content through the information tag. As when browsing in a browser application, the user may operate on the web page to access other web pages: the electronic device may receive a fourth input on the web page and, in response to the fourth input, display another web page.
In one possible design, the electronic device may record the web page links of the pages the user visits. When the call ends, the electronic device can use the recorded links to jump to the web page that was displayed when the call ended, so that the user can continue browsing it through the browser application after the call, which improves the user experience.
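The following minimal Android (Java) sketch combines the two designs above: it loads the tapped information tag's web page in a WebView and records each visited link so that the last page can be restored after the call ends. The tag-to-URL mapping and the controller class itself are illustrative assumptions, not part of this application.

```java
import android.webkit.WebView;
import android.webkit.WebViewClient;
import java.util.ArrayDeque;
import java.util.Deque;

public class InfoTagWebController {
    private final WebView webView;
    // Recorded web page links of the pages the user visits during the call.
    private final Deque<String> visitedLinks = new ArrayDeque<>();

    public InfoTagWebController(WebView webView) {
        this.webView = webView;
        webView.setWebViewClient(new WebViewClient() {
            @Override
            public void doUpdateVisitedHistory(WebView view, String url, boolean isReload) {
                if (!isReload) {
                    visitedLinks.push(url); // remember every page the user navigates to
                }
            }
        });
    }

    // Third input: the user taps an information tag; load its associated web page.
    public void onInfoTagClicked(String webPageLink) {
        webView.loadUrl(webPageLink);
    }

    // The most recently recorded web page link, used to restore the page after the call.
    public String latestLink() {
        return visitedLinks.peek();
    }
}
```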
In another possible design, if a related application is installed on the electronic device, the electronic device may jump to the corresponding application interface when the call ends. Specifically: the electronic device acquires the most recently recorded web page link and determines the application related to it; if that application is installed, the electronic device acquires the application link corresponding to the web page link, starts the related application, and displays the corresponding application interface according to the application link; if the application is not installed, the electronic device starts the browser application and displays the corresponding web page according to the web page link.
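A hedged Android (Java) sketch of this design: if an application that can handle the corresponding application link is installed, the device jumps to it; otherwise it opens the recorded web page link in the browser. How the web page link maps to an application link and package name is assumed to be resolved elsewhere (e.g., returned by the first server).

```java
import android.content.Context;
import android.content.Intent;
import android.net.Uri;

public class PostCallRedirector {
    // appPackage and appLink are assumed to be derived from the most recently recorded web link.
    public void onCallEnded(Context context, String webLink, String appPackage, String appLink) {
        Intent appIntent = new Intent(Intent.ACTION_VIEW, Uri.parse(appLink))
                .setPackage(appPackage)
                .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        if (appIntent.resolveActivity(context.getPackageManager()) != null) {
            // The related application is installed: show the corresponding application interface.
            context.startActivity(appIntent);
        } else {
            // Not installed: fall back to the browser with the recorded web page link.
            Intent webIntent = new Intent(Intent.ACTION_VIEW, Uri.parse(webLink))
                    .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
            context.startActivity(webIntent);
        }
    }
}
```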
In a second aspect, embodiments of the present application provide an electronic device comprising a display screen, a processor, and a memory for storing a computer program, wherein the computer program comprises instructions that, when executed by the processor, cause the electronic device to perform the method according to any one of the first aspect.
In a third aspect, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the method according to any of the first aspect.
In a fourth aspect, the present application provides a computer program product which, when run on an electronic device, causes the electronic device to perform the method according to any one of the first aspect.
In a fifth aspect, the present application provides a graphical user interface, specifically comprising a graphical user interface displayed by an electronic device when performing the method according to any of the first aspect.
It is to be understood that the electronic device of the second aspect, the computer storage medium of the third aspect, the computer program product of the fourth aspect and the graphical user interface of the fifth aspect are all configured to execute the corresponding method provided above. Their beneficial effects are therefore those of the corresponding method and are not repeated here.
Drawings
Fig. 1 is a schematic view of a display content processing method provided in the prior art;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a block diagram of a software structure of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of call content processing according to an embodiment of the present application;
fig. 5 is a schematic view of a scene of another call content processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another electronic device provided in the embodiment of the present application;
fig. 7 is a schematic view of a scene of another call content processing method according to an embodiment of the present application;
fig. 8 is a schematic view of a scene of another call content processing method according to an embodiment of the present application;
fig. 9 is a schematic view of a scene of another call content processing method according to an embodiment of the present application;
fig. 10 is a schematic view of a scene of another call content processing method according to an embodiment of the present application;
fig. 11 is a schematic view of a scene of another call content processing method according to an embodiment of the present application;
fig. 12 is a schematic view of a scene of another call content processing method according to an embodiment of the present application;
fig. 13 is a flowchart illustrating a further call content processing method according to an embodiment of the present application;
fig. 14 is a scene schematic diagram of another call content processing method according to an embodiment of the present application.
Detailed Description
It should be noted that, in the embodiments of this application, the descriptions "first" and "second" are used to distinguish different messages, devices, modules, and the like; they neither indicate a sequential order nor restrict "first" and "second" objects to be of different types.
The "user" in the embodiment of the present application refers to a user who uses an electronic device unless otherwise specified.
The term "A and/or B" in the embodiments of this application merely describes an association relationship between the associated objects, indicating that three relationships are possible: A alone, both A and B, and B alone. In addition, the character "/" in the embodiments of this application generally indicates an "or" relationship between the objects before and after it.
Some of the flows described in the embodiments of this application include operations that occur in a particular order, but these operations may be performed out of that order or in parallel. Operation numbers such as 101 and 102 merely distinguish the operations and do not themselves imply any order of execution. In addition, the flows may include more or fewer operations, and these operations may be performed sequentially or in parallel.
The call content processing method provided by the embodiments of this application can be applied to an electronic device. Illustratively, the electronic device may be a mobile phone, a tablet computer, a laptop computer, a digital camera, a personal digital assistant (PDA), a navigation device, a mobile Internet device (MID), a wearable device, or the like.
Fig. 2 shows a schematic structural diagram of the electronic device 100.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a Subscriber Identification Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The controller can generate operation control signals according to the instruction operation code and timing signals, completing the control of fetching and executing instructions.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves system efficiency.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display screen 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. The processor 110 and the display screen 194 communicate through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface may be configured by software. The GPIO interface may be configured as a control signal and may also be configured as a data signal. In some embodiments, a GPIO interface may be used to connect the processor 110 with the camera 193, the display 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, and the like.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 100, and may also be used to transmit data between the electronic device 100 and a peripheral device. It can also be used to connect earphones and play audio through them, and to connect other electronic devices, such as AR devices.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger via the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100. The charging management module 140 may also supply power to the electronic device through the power management module 141 while charging the battery 142.
The power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140, and supplies power to the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In some other embodiments, the power management module 141 may also be disposed in the processor 110. In other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display screen 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display screen 194 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 194 is used to display images, video, and the like. The display screen 194 includes a display panel. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
The electronic device 100 may implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into image signal in standard RGB, YUV and other formats. In some embodiments, the electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like. The storage data area may store data (such as audio data, phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some of its functional modules may be disposed in the processor 110. In other embodiments, the microphone 170C converts the collected sound signal into an electrical signal, which the audio module 170 receives and converts into an audio signal. In other embodiments, the audio module may convert an audio signal into an electrical signal, which the speaker 170A receives and converts into a sound signal for output.
The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 100 can play music or conduct a hands-free call through the speaker 170A.
The receiver 170B, also called "earpiece", is used to convert the electrical audio signal into an acoustic signal. When the electronic apparatus 100 receives a call or voice information, it can receive voice by placing the receiver 170B close to the ear of the person.
The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals. When making a call or sending voice information, the user can input a sound signal into the microphone 170C by speaking with the mouth close to it. The electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C to implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may even be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement directional recording, and so on.
The headphone interface 170D is used to connect a wired headphone. The headset interface 170D may be the USB interface 130, or may be a 3.5mm open mobile electronic device platform (OMTP) standard interface, a cellular telecommunications industry association (cellular telecommunications industry association of the USA, CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and convert it into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensor 180A, such as resistive pressure sensors, inductive pressure sensors and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of electrically conductive material; when a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 100 determines the strength of the pressure from the change in capacitance. When a touch operation is applied to the display screen 194, the electronic device 100 detects the intensity of the touch operation through the pressure sensor 180A, and may also calculate the touched position from its detection signal. In some embodiments, touch operations applied to the same position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction for viewing the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction for creating a new short message is executed.
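For illustration only, pressure-dependent dispatch of this kind could be sketched on Android with MotionEvent.getPressure(); the threshold value and the two handlers are hypothetical, and a real device would calibrate against the pressure sensor 180A.

```java
import android.view.MotionEvent;
import android.view.View;

public class PressureAwareTouchListener implements View.OnTouchListener {
    private static final float FIRST_PRESSURE_THRESHOLD = 0.75f; // illustrative value

    @Override
    public boolean onTouch(View view, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_UP) {
            if (event.getPressure() < FIRST_PRESSURE_THRESHOLD) {
                viewShortMessage();   // light touch on the icon: view the short message
            } else {
                createShortMessage(); // firm touch on the icon: create a new short message
            }
            return true;
        }
        return false;
    }

    private void viewShortMessage() { /* open the message list */ }
    private void createShortMessage() { /* open the compose screen */ }
}
```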
The gyroscope sensor 180B may be used to determine the motion attitude of the electronic device 100. In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may be used for anti-shake photography: for example, when the shutter is pressed, the gyroscope sensor 180B detects the shake angle of the electronic device 100, calculates the distance the lens module needs to compensate according to that angle, and lets the lens counteract the shake through a reverse movement. The gyroscope sensor 180B may also be used in navigation and somatosensory gaming scenarios.
The air pressure sensor 180C is used to measure air pressure. In some embodiments, electronic device 100 calculates altitude, aiding in positioning and navigation, from barometric pressure values measured by barometric pressure sensor 180C.
The magnetic sensor 180D includes a Hall sensor. The electronic device 100 may use the magnetic sensor 180D to detect the opening and closing of a flip holster. In some embodiments, when the electronic device 100 is a flip phone, it may detect the opening and closing of the flip cover with the magnetic sensor 180D, and set features such as automatic unlocking upon opening according to the detected open or closed state of the holster or flip cover.
The acceleration sensor 180E may detect the magnitude of the acceleration of the electronic device 100 in various directions (typically along three axes), and can detect the magnitude and direction of gravity when the electronic device 100 is stationary. It can also be used to recognize the posture of the electronic device, and is applied in landscape/portrait switching, pedometers and similar applications.
The distance sensor 180F is used to measure distance. The electronic device 100 may measure distance by infrared or laser. In some embodiments, for example when shooting a scene, the electronic device 100 may use the distance sensor 180F to measure distance for fast focusing.
The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared one. The electronic device 100 emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from a nearby object. When sufficient reflected light is detected, the electronic device 100 can determine that there is an object nearby; when insufficient reflected light is detected, it can determine that there is none. The electronic device 100 can use the proximity light sensor 180G to detect that the user is holding it close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
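For illustration, a simplified Android (Java) sketch of the ear-detection behavior described above; a production dialer would normally hold a PowerManager proximity wake lock rather than toggle the screen by hand.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class ProximityScreenController implements SensorEventListener {
    private final SensorManager sensorManager;
    private final Sensor proximitySensor;

    public ProximityScreenController(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        proximitySensor = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
    }

    public void startListening() {
        sensorManager.registerListener(this, proximitySensor, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float distance = event.values[0]; // near/far reading from the proximity light sensor
        if (distance < proximitySensor.getMaximumRange()) {
            turnScreenOff(); // an object (the ear) is near: darken the screen to save power
        } else {
            turnScreenOn();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    private void turnScreenOff() { /* e.g., acquire a proximity wake lock */ }
    private void turnScreenOn()  { /* e.g., release the wake lock */ }
}
```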
The ambient light sensor 180L is used to sense the ambient light level. Electronic device 100 may adaptively adjust the brightness of display screen 194 based on the perceived ambient light level. The ambient light sensor 180L may also be used to automatically adjust the white balance when taking a picture. The ambient light sensor 180L may also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touches.
The fingerprint sensor 180H is used to collect a fingerprint. The electronic device 100 can utilize the collected fingerprint characteristics to unlock the fingerprint, access the application lock, photograph the fingerprint, answer an incoming call with the fingerprint, and so on.
The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 100 implements a temperature handling strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown caused by low temperature. In still other embodiments, when the temperature is below a further threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
The touch sensor 180K is also called a "touch device". The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, which is also called a "touch screen". The touch sensor 180K is used to detect a touch operation applied thereto or nearby. The touch sensor can communicate the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may be disposed on a surface of the electronic device 100, different from the position of the display screen 194.
The bone conduction sensor 180M may acquire a vibration signal. In some embodiments, the bone conduction sensor 180M may acquire the vibration signal of the vibrating bone mass of the human vocal part. The bone conduction sensor 180M may also contact the human pulse to receive the blood-pressure pulsation signal. In some embodiments, the bone conduction sensor 180M may also be disposed in an earphone, forming a bone conduction headset. The audio module 170 may parse out a voice signal from the vibration signal of the vocal-part bone mass acquired by the bone conduction sensor 180M, realizing a voice function. The application processor may parse heart-rate information from the blood-pressure pulsation signal acquired by the bone conduction sensor 180M, realizing a heart-rate detection function.
The keys 190 include a power key, volume keys, and the like. The keys 190 may be mechanical keys or touch keys. The electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100.
The motor 191 may generate a vibration cue. The motor 191 may be used for incoming call vibration cues, as well as for touch vibration feedback. For example, touch operations applied to different applications (e.g., photographing, audio playing, etc.) may correspond to different vibration feedback effects. The motor 191 may also respond to different vibration feedback effects for touch operations applied to different areas of the display screen 194. Different application scenes (such as time reminding, receiving information, alarm clock, game and the like) can also correspond to different vibration feedback effects. The touch vibration feedback effect may also support customization.
Indicator 192 may be an indicator light that may be used to indicate a state of charge, a change in charge, or a message, missed call, notification, etc.
The SIM card interface 195 is used to connect a SIM card. The SIM card can be brought into and out of contact with the electronic device 100 by being inserted into or pulled out of the SIM card interface 195. The electronic device 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support a Nano SIM card, a Micro SIM card, a standard SIM card, etc. Multiple cards may be inserted into the same SIM card interface 195 at the same time; the types of those cards may be the same or different. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The electronic device 100 interacts with the network through the SIM card to implement functions such as calling and data communication. In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card. The eSIM card can be embedded in the electronic device 100 and cannot be separated from it.
The software system of the electronic device 100 may employ a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture. The embodiment of the present application takes an Android system with a layered architecture as an example, and exemplarily illustrates a software structure of the electronic device 100.
Fig. 3 is a block diagram of a software structure of the electronic device 100 according to the embodiment of the present application.
The layered architecture divides the software into several layers, each layer having a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, an application layer, an application framework layer, an Android runtime (Android runtime) and system library, and a kernel layer from top to bottom.
The application layer may include a series of application packages.
As shown in fig. 3, the application package may include applications such as camera, gallery, calendar, phone call, map, navigation, WLAN, bluetooth, music, video, short message, etc.
The application framework layer provides an Application Programming Interface (API) and a programming framework for the application program of the application layer. The application framework layer includes a number of predefined functions.
As shown in FIG. 3, the application framework layers may include a window manager, content provider, view system, phone manager, resource manager, notification manager, and the like.
The window manager is used to manage window programs. The window manager can obtain the display size, judge whether there is a status bar, lock the screen, capture the screen, and so on.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
The view system includes visual controls such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
The phone manager is used to provide the communication functions of the electronic device 100, for example the management of call states (including connected, hung up, etc.).
The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and the like.
The notification manager enables applications to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example notifications of completed downloads or message reminders. The notification manager may also present notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen as a dialog window. It can, for example, show text information in the status bar, sound a prompt tone, vibrate the electronic device, or flash the indicator light.
The Android runtime comprises a core library and a virtual machine, and is responsible for the scheduling and management of the Android system.
The core library comprises two parts: one part contains the functions that the java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files, and performs functions such as object life-cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules. For example: surface managers (surface managers), Media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., OpenGL ES), 2D graphics engines (e.g., SGL), and the like.
The surface manager is used to manage the display subsystem and provide fusion of 2D and 3D layers for multiple applications.
The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media library may support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is the layer between hardware and software. It contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The following exemplarily describes the workflow of the software and hardware of the electronic device 100 in connection with a photo-capture scenario.
When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is issued to the kernel layer. The kernel layer processes the touch operation into a raw input event (including touch coordinates, a time stamp of the touch operation, and other information), and the raw input event is stored at the kernel layer. The application framework layer acquires the raw input event from the kernel layer and identifies the control corresponding to the event. Taking as an example a touch click operation whose corresponding control is the camera application icon: the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
The methods provided by the embodiments shown in fig. 4 to 14 below are applied to the electronic devices provided by the foregoing embodiments.
Fig. 4 is a flowchart illustrating a call content processing method according to an embodiment of the present application. As shown in fig. 4, the method includes:
The electronic device being in a call connection state means that it has established a call connection with another electronic device. The call may include a voice call or a video call. Illustratively, as shown in fig. 5(a), a first user uses an electronic device (e.g., a first electronic device) to conduct a video call with another electronic device (e.g., a second electronic device) used by a second user; at this time, the electronic device is in a call connection state with the other electronic device. The electronic device displays a call interface (e.g., interface 501). The call interface can include a call service start button (e.g., button 5011) for starting the call service. The electronic device receives a first input from the user, the first input being: the call service start button (e.g., button 5011) is touched.
It should be noted that the first input includes, but is not limited to, the above manner. For example, when the electronic device is connected to another device (e.g., a Bluetooth headset or a smart watch), the first input may be a signal sent by that device instructing the electronic device to start the call service; illustratively, when the user long-presses the power key of a Bluetooth headset, the headset sends the electronic device a signal instructing it to start the call service. For another example, when the electronic device is foldable, the first input may be unfolding or folding the electronic device. Illustratively, as shown in fig. 6, the electronic device is a foldable mobile phone: when the user unfolds the electronic device so that the included angle a between its first portion and its second portion exceeds a predetermined threshold, the electronic device starts the call service.
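As one possible, non-limiting realization of the folding first input, a device running Android 11 or later could observe the hinge angle sensor; the unfold threshold below is illustrative, since this application does not fix a specific value for the predetermined threshold.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class FoldInputDetector implements SensorEventListener {
    private static final float UNFOLD_THRESHOLD_DEGREES = 150f; // illustrative threshold
    private final SensorManager sensorManager;
    private final Runnable startCallService;

    public FoldInputDetector(Context context, Runnable startCallService) {
        this.sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        this.startCallService = startCallService;
    }

    public void start() {
        Sensor hinge = sensorManager.getDefaultSensor(Sensor.TYPE_HINGE_ANGLE);
        if (hinge != null) {
            sensorManager.registerListener(this, hinge, SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float angle = event.values[0]; // included angle a between the two portions, in degrees
        if (angle > UNFOLD_THRESHOLD_DEGREES) {
            startCallService.run(); // treat unfolding past the threshold as the first input
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```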
The call service provides at least one piece of key information required by the user by processing the call content. Call services include, but are not limited to: explaining rarely used words, completing sentences, correcting speech errors, correcting grammar errors, schedule management, translation, weather inquiry and navigation.
Illustratively, as shown in fig. 5(b), in response to the first input (button 5011 being touched), the electronic device starts the call service.
Optionally, as shown in fig. 5(b), when starting the call service, or while the call service is starting, the electronic device may display a notification message 5021 notifying the user that the call service is being started.
Step 403 may specifically include the following steps 4031 to 4039:
Illustratively, the first data may include the first voice data and/or the second voice data.
Illustratively, the application scenario shown in fig. 5(c) is as follows: after the electronic device starts the call service, the second user asks, "What good movies are there recently?"; the first user answers, "I heard Movie A (the name of a certain movie) is good."
The mobile communication module 150 of the electronic device (e.g., the first electronic device) receives the second voice data transmitted by the other device (e.g., the second electronic device); the second voice data is converted into an electrical signal by the audio module 170 and then output as a sound signal by the speaker 170A (e.g., "What good movies are there recently?"). The electronic device may send the second voice data to the first server.
The first user speaks near the microphone 170C; the first user's sound signal (e.g., "I heard Movie A is good!") is converted into an electrical signal by the microphone 170C and then into the first voice data by the audio module 170. That is, the first voice data is voice data that the electronic device generates by collecting external sound. The electronic device sends the first voice data to the first server.
It is understood that the first voice data corresponds to the content spoken by the first user, and the second voice data to the content spoken by the second user. That is, the electronic device may transmit the call content of the first user and the second user, i.e., the call content between the first electronic device and the second electronic device, to the first server.
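For illustration, the first voice data could be captured on an Android device roughly as follows with AudioRecord (the RECORD_AUDIO permission is required; the VoiceDataSink callback that streams chunks to the first server is an assumption of this sketch).

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class FirstVoiceDataCapture {
    private static final int SAMPLE_RATE_HZ = 16000;
    private volatile boolean running;

    public interface VoiceDataSink { void onVoiceData(byte[] data, int length); }

    public void start(VoiceDataSink sink) {
        int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE_HZ,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.VOICE_COMMUNICATION, // microphone path tuned for calls
                SAMPLE_RATE_HZ, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize);
        byte[] buffer = new byte[bufferSize];
        running = true;
        recorder.startRecording();
        new Thread(() -> {
            while (running) {
                int read = recorder.read(buffer, 0, buffer.length);
                if (read > 0) {
                    sink.onVoiceData(buffer, read); // e.g., stream this chunk to the first server
                }
            }
            recorder.stop();
            recorder.release();
        }).start();
    }

    public void stop() { running = false; }
}
```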
Step 4033, the first server generates second data according to the first data.
Illustratively, generating the second data from the first data may include the following steps 40331 to 40334:
step 40331, the first server converts the voice data in the first data into text data.
Illustratively, as shown in fig. 5(c), the electronic device sends the second voice data to the first server, which converts the voice data into text data; the resulting second text data is: "What good-looking movies are there?". Similarly, the electronic device sends the first voice data to the first server, which converts it into text data; the resulting first text data is: "I heard Movie A is good!".
40332, the first server extracts the keywords in the text data.
Exemplarily, as shown in fig. 5(c), when the text data is: "what a good-looking movie is there? "the keywords extracted by the first server may be: "movie". When the text data is: "listen and talk movie A without error! "the keywords extracted by the first server may be: "movie a".
The embodiment of the present application does not limit the method for recognizing speech and extracting keywords. The skilled person can extract keywords using methods known in the art.
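For concreteness, a minimal sketch of steps 40331-40332 follows, assuming a generic speech-to-text engine and naive frequency-based keyword extraction; both are stand-ins for whatever known methods are actually used, not the patented technique.

```python
import re
from collections import Counter

def transcribe(voice_data: bytes) -> str:
    """Step 40331 stand-in: plug in any speech-to-text engine here."""
    raise NotImplementedError("replace with a real ASR call")

# Words too generic to serve as keywords (illustrative only).
STOPWORDS = {"what", "good", "are", "there", "recently", "i", "hear", "is"}

def extract_keywords(text: str, top_k: int = 3) -> list[str]:
    """Step 40332 stand-in: keep the most frequent non-stopword tokens."""
    tokens = [t.lower() for t in re.findall(r"[\w']+", text)]
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_k)]

# extract_keywords("I hear movie A is good!") -> ["movie", "a"]
```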
Step 40333, the first server determines a second server.
The second server is configured to provide key information related to the extracted keywords.
The first server may determine the second server according to the extracted keywords. For example, the first server may store one or more lists. As shown in Table 1, a list includes one or more keywords and the names of one or more servers corresponding to those keywords. The first server matches the extracted keywords against the keywords in the list and determines the second server to be the server corresponding to the matched list keywords. The second server may be one server or a plurality of servers.
For example, as shown in fig. 5(c), when the keyword extracted by the first server is "movie a (a movie name)", the first server determines that the second server is "Fandango".
It should be noted that, when a plurality of servers correspond to a keyword in the list, the first server may determine the second server according to the user's settings or usage habits.
For example, assume that the keywords extracted by the first server are "buy", "cell phone", and "Huawei P30 Pro". As shown in the server list in Table 1, the servers corresponding to these keywords are "Amazon" and "Taobao". Assuming the user uses the Taobao application more frequently than the Amazon application, the first server may determine that the second server is "Taobao" (a minimal sketch of this lookup follows Table 1).
Optionally, the server list further includes a semantic type. The first server may determine the corresponding server according to the semantic type.
TABLE 1

| Serial number | Semantic type | Keywords | Server name |
| --- | --- | --- | --- |
| 1 | Shopping | Trade name, "buy", "price" | Taobao and Amazon |
| 2 | Video | Video name | YouTube, … |
| 3 | Literature | Poetry, idiom, … | Huawei |
| 4 | Traffic | Geographical name | Google Maps and … |
| 5 | Tickets | "Order", "appointment", movie name | Fandango |
| 6 | News | "News", "reports" | BBC News |
| 7 | Travel | Shop name, "vacation" | Booking |
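A minimal sketch of the list matching in step 40333, using a hypothetical in-memory rendering of Table 1 and the usage-habit tie-break described above; the data structure and ranking function are assumptions for illustration.

```python
# Hypothetical in-memory rendering of Table 1: keyword sets -> candidate servers.
SERVER_LIST = [
    ({"buy", "price", "cell phone"}, ["Taobao", "Amazon"]),        # shopping
    ({"order", "appointment", "movie", "movie a"}, ["Fandango"]),  # tickets
    ({"news", "reports"}, ["BBC News"]),                           # news
]

def determine_second_server(keywords: list[str], usage_rank: dict[str, int]) -> str:
    """Step 40333: match extracted keywords against the list; break ties by usage habit."""
    extracted = {k.lower() for k in keywords}
    candidates: list[str] = []
    for patterns, servers in SERVER_LIST:
        if patterns & extracted:
            candidates.extend(servers)
    if not candidates:
        raise LookupError("no server matches the extracted keywords")
    # When several servers match, prefer the one whose application the user uses most.
    return max(candidates, key=lambda s: usage_rank.get(s, 0))

# determine_second_server(["buy", "cell phone"], {"Taobao": 9, "Amazon": 3}) -> "Taobao"
```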
Step 40334, the first server generates the second data.
The second data may include the keywords. The second data may be in JavaScript Object Notation (JSON) format; JSON is a lightweight data-interchange format derived from JavaScript object syntax that provides a simple way to describe complex objects. The form of the second data may differ depending on the second server.
For example, assuming that the keywords extracted by the first server are "buy", "cell phone", and "Huawei P30 Pro", the second data may be: information: { "type": "mobile phone", "name": "Huawei P30 Pro" }.
Alternatively, as shown in fig. 5(c), when the keywords are "movie" and "movie A", the second data may be: information: { "name": "movie A" }.
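A minimal sketch of step 40334, serializing the keywords into the JSON shape shown above; the helper and its field mapping are assumptions, since the exact schema depends on the second server.

```python
import json

def build_second_data(name: str, type_: str | None = None) -> str:
    """Step 40334: wrap the extracted keywords as JSON second data."""
    info = {"name": name}        # e.g. {"name": "movie A"}
    if type_ is not None:
        info["type"] = type_     # e.g. {"type": "mobile phone", ...}
    return json.dumps({"information": info}, ensure_ascii=False)

# build_second_data("Huawei P30 Pro", "mobile phone")
#   -> '{"information": {"name": "Huawei P30 Pro", "type": "mobile phone"}}'
```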
Step 4034, the first server sends a first processing request to the second server. The first processing request includes the second data and is used to instruct the second server to provide information related to the second data.
Step 4036, in response to the first processing request, the second server sends the third data to the first server.
The third data may include a link, such as a Uniform Resource Locator (URL). Through the link, the electronic device may acquire information related to the keyword. The information may include: commodity information, drama introductions, movie reviews, the full text of a poem, a map, singer introductions, news, hotel rankings, and the like.
For example, assume that the second data included in the first processing request received by the second server is: information: { "type": "mobile phone", "name": "Huawei P30 Pro" }. The third data may then include a web page link to the introduction page of the Huawei P30 Pro on the Taobao website, such as { "web": "www.taobao.com/phones/huawei/p30-pro/" }. That is, when the electronic device accesses the web page through the link, it may acquire information related to the Huawei P30 Pro.
Illustratively, as shown in fig. 5(c), when the second data included in the first processing request received by the second server is: information: { "name": "movie A" }, the third data may include a web page link to the introduction page of movie A on the Fandango website, such as { "web": "www.fandango.com/movie-a/movie-overview" }. That is, when the electronic device accesses the web page through the link, it may acquire information related to movie A.
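Steps 4034-4036 can then be sketched as an HTTP round trip between the two servers; the `/process` route and the use of the Python `requests` library are illustrative assumptions.

```python
import requests

def request_third_data(second_server_url: str, second_data: str) -> dict:
    """Steps 4034-4036: the first server asks the second server for related links."""
    resp = requests.post(
        f"{second_server_url}/process",  # hypothetical route on the second server
        data=second_data,
        headers={"Content-Type": "application/json"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"web": "www.fandango.com/movie-a/movie-overview"}
```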
Step 4037, the first server receives the third data sent by the second server.
Step 4038, the first server sends the fourth data to the electronic device.
The fourth data may include: keywords, text data, and/or links. Optionally, the fourth data may further include the name of the second server.
Illustratively, the fourth data may take a form like the following.
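A plausible shape, assembled from the elements listed above (keywords, text data, web page link, and the name of the second server); the field names here are assumptions for illustration, not a format taken from the original.

```json
{
  "keyword": "movie A",
  "text": "I hear movie A is good!",
  "web": "www.fandango.com/movie-a/movie-overview",
  "server": "Fandango"
}
```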
In summary, the electronic device may obtain at least one piece of key information about the content of the call between the first electronic device and the second electronic device. It is understood that, in this case, the at least one piece of key information provided by the call service is the fourth data.
Step 404, the electronic device displays a call service interface according to the at least one piece of key information.
Illustratively, as shown in fig. 5(c), the electronic device displays an interface 503. The interface 503 includes a call interface (e.g., interface 5036) and a call service interface (e.g., interface 5037). The electronic device displays the call interface in a first display area 5031 and the call service interface in a second display area 5032. The electronic device may generate a text label, a keyword label, and/or an information label based on the at least one piece of key information provided by the call service. For example, the call service interface may include a text label (e.g., label 5033, "What good movies are there recently?"), a keyword label, and/or an information label. The text label is used to display text data; the keyword label is used to display keywords; the information label is associated with a link. An information label may be located near the text label or keyword label associated with it.
Assume that the information tag is associated with a web page link. The electronic device may receive a third input to the information tag from the user. The third input may be: the user touches the information tag. In response to the third input to the information tag, the electronic device may access, through the browser kernel, the web page associated with the information tag, and may display that web page (e.g., the introduction page of movie A or the ticket-booking page of movie A) in a web view (WebView).
Alternatively, in response to the third input to the information tag, the electronic device may launch a browser application to access the web page associated with the information tag and display the browser application's interface in the second display area.
Although not shown in the figures, the call service interface may also include a website tag. The website tag indicates the server corresponding to the web page link, such as Taobao or Amazon. An application tag or website tag may be located near the information tag associated with it.
Optionally, the electronic device may differentiate label types by color, shape, size, and the like.
Step 405, the electronic device determines whether a second input is received.
If the electronic device receives the second input, it performs step 407; if it does not receive the second input, it performs step 406.
The second input instructs the electronic device to end the call service. In response to the second input, the electronic device ends the call service. For example, the second input may be: the user long-presses the power key.
It should be noted that the second input includes, but is not limited to, the above manner. For example, the second input may be unfolding or folding the electronic device. Illustratively, as shown in fig. 6, the electronic device is a foldable mobile phone; when the included angle α formed between the first portion of the electronic device and the second portion of the electronic device is smaller than a predetermined threshold, the electronic device ends the call service. As another example, when the electronic device is connected to another device (e.g., a Bluetooth headset or a smart watch), the second input may be a signal sent by the other device instructing the electronic device to end the call service.
Step 406, the electronic device determines whether the call has ended.
If the call has ended, step 407 is executed; if the electronic device is still in the call state, step 403 is executed.
As shown in fig. 5(d), when the first user or the second user hangs up, the first electronic device ends the call connection with the second electronic device. The first electronic device determines that the call has ended and ends the call service. The electronic device then displays an interface 504.
It can be understood that, after the electronic device ends the call service, it stops acquiring key information.
In summary, in the call content processing method provided by the embodiments of the present application, the server processes the call content in real time and can provide at least one piece of key information related to it. The first user can obtain key information related to the call content while talking with the second user and, if needed, can quickly access a web page related to the call content by touching an information tag.
Alternatively, as shown in fig. 7(a), during a voice call between the first user and the second user, the first display area 5031 for the call interface may be smaller and the second display area 5032 for the call service interface may be larger. Alternatively, as shown in fig. 7(b), the call interface may be displayed as a floating icon (e.g., icon 701).
It is understood that, as the call progresses and the electronic device repeatedly performs steps 403 and 404, the electronic device generates one label after another. Illustratively, the first user is in a video call with the second user. The electronic device repeatedly executes steps 403 and 404 and sequentially generates labels A-1, B-1, A-2, B-2, B-3, A-3, B-4, and B-5, where A-1, A-2, and A-3 relate to content spoken by the first user, and B-1, B-2, B-3, B-4, and B-5 relate to content spoken by the second user.
Alternatively, the electronic device may arrange the labels in chronological order. Illustratively, as shown in fig. 8(a), a new label (e.g., label A-3) is located below an older label (e.g., label B-3).
Alternatively, the electronic device may arrange the labels by user. Illustratively, as shown in fig. 8(a), the labels associated with the second user's speech (e.g., labels B-1, B-2, and B-3) are arranged on the left side, and the labels associated with the first user's speech (e.g., labels A-1, A-2, and A-3) are arranged on the right side.
Alternatively, the electronic device may scroll the displayed labels. Illustratively, as shown in fig. 8(a) to 8(b), old labels disappear at the top and new labels appear from the bottom, so that the user can view the labels associated with the current conversation.
The electronic device may set the scrolling speed of the labels according to the speech rate of the call, or the scrolling speed may be specified by the user. Alternatively, when a finger hovers near a label, the electronic device may stop scrolling that label and the one or more labels associated with it. Alternatively, the electronic device may stop scrolling when a finger hovers near the second display area; when the finger is removed, the electronic device quickly scrolls back to display the labels associated with the current conversation.
Alternatively, the electronic device may determine the arrangement direction and the scrolling direction of the tags according to the number of users participating in the call.
As shown in fig. 8(a) to 8(b), when only two users (e.g., a first user and a second user) participate in a call, the electronic device arranges the labels of the different users left and right and scrolls them up and down. In contrast, as shown in fig. 8(c) and 8(d), when three or more users (e.g., a first user, a second user, and a third user) participate in a call, the electronic device may arrange the labels of the different users top to bottom and scroll them left and right.
Specifically, the electronic device sequentially generates labels E-1, C-1, D-1, E-2, and C-2, where C-1 and C-2 are associated with content spoken by the first user, D-1 with content spoken by the second user, and E-1 and E-2 with content spoken by the third user.
As shown in fig. 8(c) and 8(d), the electronic device may display the first user's labels in the upper third of the area, the second user's labels in the middle third, and the third user's labels in the lower third. A new label is located to the right of an older label; old labels disappear at the left and new labels appear from the right. Of course, a new label may instead be located to the left of an older label, with old labels disappearing at the right and new labels appearing from the left.
It is understood that, in the embodiments of the present application, the style and display of the labels include, but are not limited to, the manners described in the above embodiments. For example, as shown in fig. 9, a label may be styled as a circular bubble.
The electronic device may determine the display position of a label. Illustratively, as shown in fig. 9, when "movie A" is mentioned several times in the current call, the electronic device may display the label related to movie A in the middle. The electronic device may also determine the size of a label. Illustratively, as shown in fig. 9, when "movie A" is mentioned more frequently than "movie B" in the current call, the electronic device makes the label related to movie A larger than the label related to movie B.
It will be appreciated that the electronic device may also determine display attributes such as the shape, transparency or color of the label.
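A minimal sketch of this frequency-driven sizing and placement; the scaling constants and the center-versus-edge rule are illustrative assumptions.

```python
from collections import Counter

def layout_bubbles(mentions: list[str], base: int = 40, step: int = 12) -> list[dict]:
    """Size each bubble label by mention frequency; the most mentioned goes in the middle."""
    counts = Counter(mentions)
    bubbles = []
    for rank, (keyword, count) in enumerate(counts.most_common()):
        bubbles.append({
            "keyword": keyword,
            "diameter": base + step * count,                # larger when mentioned more
            "position": "center" if rank == 0 else "edge",  # most frequent in the middle
        })
    return bubbles

# layout_bubbles(["movie A", "movie B", "movie A"]) makes the "movie A" bubble
# larger and centered, and the "movie B" bubble smaller and at the edge.
```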
It should be noted that the above embodiments describe the key information of the call content using keywords, text data, and web page links as examples; it is understood that, as shown in fig. 10, the key information includes but is not limited to these.
Fig. 10 is a scene schematic diagram of a call content processing method according to an embodiment of the present application.
As shown in fig. 10(a), when the system language of the electronic device is a first language (e.g., Chinese) and the call language is a second language (e.g., English), the first server generates second data (e.g., the text data "What's up?"). The first server may send a first processing request including the second data to the second server; the first processing request may instruct the second server to provide a first-language translation of the second data (e.g., the Chinese translation of "What's up?"). In response to the first processing request, the second server sends third data (e.g., the translated text) to the first server, and the first server sends fourth data including the third data and the second data to the electronic device. The electronic device generates the call service interface according to the fourth data (e.g., displaying the text "What's up?" together with its Chinese translation).
It should be noted that, in the call content processing method of the embodiments of the present application, the data related to the second data is not necessarily provided by another server different from the first server.
For example, the application scenario shown in fig. 10(b) is: Jack says, "Twinkle, twinkle, little star". The first server may provide a sentence supplement service. Specifically, the first server may include a lyrics library. The first server matches the second data (the text data "Twinkle, twinkle, little star") against the lyrics in the library to obtain data related to the second data, for example the lyric that follows the matched one: "How I wonder what you are". It is to be understood that the first server may also include a poetry library and/or a library of famous quotations, etc., to provide various sentence supplement services.
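A minimal sketch of this matching, assuming a small in-memory lyrics library; a real sentence supplement service would index a much larger corpus.

```python
# Hypothetical lyrics library: each entry is a list of consecutive lines.
LYRICS_LIBRARY = [
    ["Twinkle, twinkle, little star", "How I wonder what you are"],
]

def supplement_sentence(text: str) -> str | None:
    """Return the line that follows a matched lyric, if any."""
    for lines in LYRICS_LIBRARY:
        for i, line in enumerate(lines[:-1]):
            if line.lower() in text.lower():
                return lines[i + 1]
    return None

# supplement_sentence("Twinkle, twinkle, little star") -> "How I wonder what you are"
```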
As another example, the application scenario shown in fig. 10(c) is: Jack says, "My major is mechanical engineering", but mispronounces the Chinese word for "mechanical" (机械, jīxiè) as "jiè". The first server may provide a pronunciation correction service and send the correct phonetic notation of "mechanical" (xiè) to the electronic device.
In addition, in the call content processing method provided in the embodiments of the present application, the data related to the second data may also be provided by the electronic device itself.
The application scenario shown in fig. 10(d) is: Tom asks, "Are you free on the afternoon of April 3?". The first server generates second data (e.g., the text data "Are you free on the afternoon of April 3?") and sends it to the electronic device. The internal memory 121 of the electronic device stores the user's schedule. According to the second data, the electronic device obtains and displays data related to the second data (e.g., Jack's schedule for the afternoon of April 3).
It should be noted that the foregoing embodiments are described using the example in which the first data sent by the electronic device to the first server is voice data, and the first server processes the voice data to generate the second data. Alternatively, to protect user privacy, the conversion from voice data to text data (step 40331) and the keyword extraction (step 40332) may be performed by the electronic device, which then sends the keywords to the first server as the first data.
It should be noted that, in the above embodiments, the electronic device sends both the user's own voice data and the other user's voice data to the first server. Alternatively, the electronic device (e.g., a first electronic device) may send the first user's voice data to the first server, and the other user's electronic device (e.g., a second electronic device) may send the second user's voice data to the first server.
Optionally, when the electronic device accesses web pages through the browser kernel, it may record the web page link of each page the user visits. When the call service ends, the electronic device can start the browser application and, according to the recorded web page link, jump to the web page that was displayed when the call service ended.
For convenience of understanding, the above method is specifically described below with reference to specific application scenarios. Fig. 11 is a scene schematic diagram of a call content processing method according to an embodiment of the present application.
As shown in fig. 11(a), the user touches a first information tag (e.g., the information tag "ticket order"). The first information tag is associated with a first web page (e.g., the movie theater selection page under movie A's ticket-booking page). As shown in fig. 11(b), in response to a third input to the first information tag, the electronic device accesses the first web page through the browser kernel and displays it in a web view. The electronic device records the web page link of the first web page (www.fandango.com/ticket?movie="movieA"). It will be appreciated that, much like browsing in a browser application, the first web page may include one or more web page tags, which the user may operate to access further web pages. For example, the first web page includes a first web page tag (e.g., the web page tag "ABC movie theater") associated with a second web page (e.g., the time selection page for ABC movie theater under movie A's ticket-booking page).

As shown in fig. 11(b) to 11(c), the electronic device receives a fourth input from the user on the first web page. The fourth input may be: touching the first web page tag in the first web page. In response to this touch operation, the electronic device accesses the second web page through the browser kernel and displays it in the web view. The electronic device records the web page link of the second web page (www.fandango.com/ticket?movie="movieA"&museum="ABC"). It will be appreciated that the second web page may also include one or more web page tags, which the user may continue to operate to access the web pages associated with them; the electronic device likewise records the web page links of those pages. As shown in Table 2 below, the electronic device may store one or more history lists to record the web page links of the pages the user has visited (a minimal sketch of this recording follows Table 2).

As shown in fig. 11(d) and 11(f), when the call ends, the electronic device starts the browser application and displays the second web page according to its most recently recorded web page link. Alternatively, as shown in fig. 11(e), when the call ends, the electronic device may display a notification message prompting the user that the call service is closing and asking whether to continue browsing. If the user chooses to continue, the electronic device may launch the browser application and jump to the web page displayed at the end of the call; if the user chooses not to continue, the electronic device may skip these steps.
TABLE 2

| Serial number | Web page link |
| --- | --- |
| 1 | www.fandango.com/movie-a/movie-overview |
| 2 | www.fandango.com/ticket?movie="movieA" |
| 3 | www.fandango.com/ticket?movie="movieA"&museum="ABC" |
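A minimal sketch of the history list and the end-of-call jump described above; `open_in_browser` stands in for the platform's actual browser-launch call.

```python
class CallBrowsingHistory:
    """Records each web page link visited in the in-call web view (cf. Table 2)."""

    def __init__(self) -> None:
        self._links: list[str] = []

    def record(self, url: str) -> None:
        self._links.append(url)

    def latest(self) -> str | None:
        """The most recently recorded link, i.e. the page shown when the call ends."""
        return self._links[-1] if self._links else None

def on_call_service_ended(history: CallBrowsingHistory, open_in_browser) -> None:
    """Start the browser application at the last page the user was viewing."""
    url = history.latest()
    if url is not None:
        open_in_browser(url)
```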
To sum up, in the application scenario shown in fig. 12(a) to 12(f), the user must open the browser application after the call ends and search for the corresponding ticket-booking web page. By contrast, with the call content processing method shown in fig. 11(a) to 11(f), the electronic device records the web page link, starts the browser application after the call ends, and jumps to the web page displayed when the call ended, so the user can continue browsing seamlessly after the call, improving the user experience.
A wide variety of applications may be installed on an electronic device, including quick apps, applets, and so on. For example, when a user has the Taobao application installed, the user may prefer to view related information in the Taobao application rather than access the Taobao web page in a browser. Thus, optionally, the electronic device may launch the application and jump to the application interface corresponding to the web page the user last visited.
Specifically, as shown in fig. 13, after step 407 shown in fig. 4 (i.e., after the call service ends), when the user chooses to continue browsing (or, without user selection, the electronic device automatically continues), the method may further include:
As shown in fig. 11, in response to the third input to the first information tag, the electronic device accesses the first web page through the browser kernel, and the user may operate the web page tags in the page to access other web pages. The most recently recorded web page link refers to the link of the web page the user last visited before the call service ended.
It will be appreciated that the electronic device may determine the associated application based on the web page link. For example, when the web page link is www.taobao.com/phones/huawei/p30-pro/, the application associated with the web page link may be the Taobao application.
If the electronic device has an application corresponding to the web page installed, step 1304 is executed; if not, step 1303 is executed.
Step 1303, the electronic device starts the browser application and jumps to the corresponding web page according to the web page link.
Step 1308, in response to the second processing request, the second server sends the application link to the first server.
Step 1311, the electronic device receives the application link sent by the first server.
Step 1312, the electronic device starts the application, and jumps to a corresponding application interface according to the application link.
In summary, when the electronic device has the corresponding application installed, it may acquire the application link corresponding to the web page link and jump to the application interface corresponding to the web page last visited by the user, so that the user can view the relevant information in the application.
Alternatively, the electronic device may send the second processing request directly to the second server without going through the first server, and the second server may send the application link directly to the electronic device. Alternatively, the electronic device may send the web page link directly to the first server without going through the second server, and the first server may send the application link directly to the electronic device. Illustratively, the first server stores a lookup table containing web page links and the link addresses of the application interfaces corresponding to them, i.e., the application links. The first server determines the application link corresponding to the web page link according to the lookup table and sends it to the electronic device.
Alternatively, the electronic device may determine the corresponding application link from the web page link itself. Illustratively, the links in the third data include the web page link of a web page and the application link of the application interface corresponding to that web page. The electronic device may determine the application link corresponding to the most recently recorded web page link from the most recently recorded web page link together with the web page link and application link included in the third data. For example, the third data is: { "web": "www.amazon.com/phones/huawei/p30-pro/"; "app": "amazon://phones/huawei/p30-pro/" }. The electronic device compares the most recently recorded web page link (e.g., www.amazon.com/phones/huawei/p30-pro/spec/) with the web page link in the third data (www.amazon.com/phones/huawei/p30-pro/), and appends the extra field (spec/) as a suffix to the application link in the third data (amazon://phones/huawei/p30-pro/), obtaining the application link corresponding to the most recently recorded web page link (amazon://phones/huawei/p30-pro/spec/). It should be noted that this method requires the web page link and the application link to keep the same or a similar suffix form.
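A minimal sketch of this suffix rule; as the paragraph notes, it only works when the web page link and the application link keep the same or a similar suffix form.

```python
def to_app_link(latest_web: str, base_web: str, base_app: str) -> str | None:
    """Derive the app link for the most recently recorded web page link.

    latest_web: most recently recorded link, e.g. "www.amazon.com/phones/huawei/p30-pro/spec/"
    base_web:   web page link from the third data, "www.amazon.com/phones/huawei/p30-pro/"
    base_app:   application link from the third data, "amazon://phones/huawei/p30-pro/"
    """
    if not latest_web.startswith(base_web):
        return None  # no shared prefix; fall back to the browser application
    suffix = latest_web[len(base_web):]  # the added field, e.g. "spec/"
    return base_app + suffix             # "amazon://phones/huawei/p30-pro/spec/"
```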
It should be noted that, if the second display area contains multiple web page interfaces when the call ends, the electronic device may start multiple corresponding applications and jump to the corresponding application interfaces. Alternatively, the electronic device may start the browser and display the corresponding web page interfaces in multiple windows, or start the corresponding applications and the browser to display the corresponding interfaces respectively. For ease of understanding, this is described below with reference to the application scenario shown in fig. 14.
As shown in fig. 14(a), the electronic device displays a call service interface 1410. The user touches a first information tag (the information tag "ticket order") associated with a first web page (the movie theater selection page under movie A's ticket-booking page). As shown in fig. 14(b), in response to the touch operation on the first information tag, the electronic device accesses the first web page through the browser kernel and displays its web page interface 1411. As shown in fig. 14(c), the electronic device receives a fourth input, which may be: touching a first web page tag in the first web page. In response to the touch operation on the first web page tag (the web page tag "ABC cinema"), the electronic device accesses a second web page (the time selection page under movie A's ticket-booking page) through the browser kernel and displays its web page interface 1412.
When the electronic device displays a web page interface, it may also display a call service home button 1401, a return button 1402, and/or a delete button 1403. The return button instructs the electronic device to display a previously visited web page: for example, when the electronic device displays the second web page interface 1412 and the user touches the return button 1402, the electronic device displays the previously visited web page interface 1411 in response. The delete button instructs the electronic device to close the web page interface and display the call service interface: when the user touches the delete button 1403, the electronic device ends the display of the web page interface and displays the call service interface in the second display area. The call service home button instructs the electronic device to display the call service interface: when the user touches the call service home button 1401, the electronic device displays the call service interface 1410 in the second display area together with the web page interface 1412, as shown in fig. 14(d).

As shown in fig. 14(d) to 14(e), in response to a user operation on a third information tag (the "purchase" tag), the electronic device displays, in the second display area, the web page interface 1413 of a fourth web page associated with the third information tag. If the second display area contains multiple web page interfaces when the call ends, the electronic device can start the corresponding applications or the browser application and jump to the corresponding application interfaces or web page interfaces according to the corresponding application links or web page links. As shown in fig. 14(f), if the second display area contains the second web page 1412 (the time selection page under movie A's ticket-booking page) and the fourth web page 1413 (the purchase page of the Mate 20 Pro) at the end of the call, and assuming the electronic device has the first application (Fandango) corresponding to the second web page installed but not the application (Amazon) corresponding to the fourth web page, the electronic device starts the first application and jumps to the application interface 1414 corresponding to the second web page, and starts the browser and jumps to the web page 1415 corresponding to the fourth web page.
An embodiment of the present application discloses an electronic device, including: a display screen, a processor, a memory, one or more sensors, an application program, a computer program, and a communication module. These components may be connected by one or more communication buses. The one or more computer programs are stored in the memory and configured to be executed by the one or more processors; the one or more computer programs include instructions that may be used to perform the steps in the foregoing embodiments.
For example, the processor may specifically be the processor 110 shown in fig. 1, the memory may specifically be the internal memory and/or the external memory 120 shown in fig. 1, the display screen may specifically be the display screen 194 shown in fig. 1, the sensor may specifically be one or more sensors in the sensor module 180 shown in fig. 1, and the communication module may include the mobile communication module 150 and/or the wireless communication module 160, which is not limited in this embodiment of the present application.
In addition, the embodiment of the application also provides a Graphical User Interface (GUI) on the electronic device, and the GUI specifically includes a GUI displayed by the electronic device when the electronic device executes the method.
In the above embodiments, the implementation may be realized, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented using a software program, the implementation may take the form of a computer program product, in whole or in part. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, hard disk, or magnetic tape), an optical medium (e.g., a DVD), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The above descriptions are only specific implementations of the embodiments of the present application, but the protection scope of the embodiments is not limited thereto; any change or substitution within the technical scope disclosed in the embodiments of the present application shall be covered by that scope. Therefore, the protection scope of the embodiments of the present application shall be subject to the protection scope of the claims.
Claims (25)
1. A method for processing call content is characterized by comprising the following steps:
when a first electronic device is in a call connection state with a second electronic device, the first electronic device receives a first input; the first input is that the first electronic device is folded; or the first input is that the first electronic device is unfolded;
in response to the first input, the first electronic equipment acquires at least one piece of key information of conversation content between the first electronic equipment and the second electronic equipment; the key information comprises webpage links related to keywords in the call content;
the first electronic equipment displays a first interface according to the at least one piece of key information, wherein the first interface comprises an information tag corresponding to the key information, and the information tag corresponds to the webpage link;
the first electronic device receiving a third input to the information tag; in response to the third input to the information tag, the first electronic device displays a web page associated with the information tag;
the first electronic device ends the call connection with the second electronic device; in response to the end of the call connection, the first electronic device ends the acquisition of the key information;
the first electronic equipment acquires a latest recorded webpage link of the first electronic equipment, wherein the latest recorded webpage link refers to a webpage link last visited by a user before the call service is ended;
the first electronic equipment starts a browser application and displays a webpage corresponding to the latest recorded webpage link according to the latest recorded webpage link; or
The first electronic device determines an application related to the latest recorded web link according to the latest recorded web link, acquires an application link corresponding to the latest recorded web link, starts the application, and displays an application interface corresponding to the application link according to the application link.
2. The method of claim 1, wherein the key information further comprises any one or both of:
text data corresponding to at least a part of the call content;
keywords in the call content.
3. The method of claim 2, wherein the tags further comprise either or both of text tags and keyword tags;
the text label corresponds to the text data;
the keyword tag corresponds to the keyword.
4. The method of claim 1, wherein the obtaining, by the first electronic device, at least one piece of key information of the content of the call between the first electronic device and the second electronic device comprises:
the first electronic equipment sends the call content to a first server;
the first electronic equipment receives the at least one piece of key information sent by the first server.
5. The method of claim 1, wherein the first electronic device displays a web page associated with the information tag, comprising:
the first electronic equipment accesses the webpage associated with the information tag through a browser kernel;
the first electronic device displays the web page associated with the information tag in a web view.
6. The method of claim 5, wherein after the first electronic device displays the webpage associated with the information tag, further comprising:
the first electronic device receiving a fourth input to the webpage associated with the information tag;
in response to the fourth input, the first electronic device displays other web pages.
7. The method of claim 6, wherein after receiving the fourth input to the webpage associated with the information tag, the first electronic device further comprises:
the first electronic equipment records webpage links of other webpages.
8. The method of any of claims 1-7, wherein the first electronic device ending the obtaining of the key information comprises:
The first electronic device receiving a second input; in response to the second input, the first electronic device ends the obtaining of the key information.
9. The method of any of claims 1-7, wherein prior to the first electronic device receiving the first input, further comprising:
the first electronic equipment displays a second interface, and the second interface is a call interface.
10. The method of claim 9,
when the first electronic equipment displays the first interface, the second interface is zoomed out and is displayed simultaneously with the first interface; or
when the first electronic device displays the first interface, the second interface is reduced and displayed in a floating manner.
11. The method according to any one of claims 1 to 7,
the call content comprises one or both of first voice data and second voice data;
the first voice data is generated by the first electronic equipment through collecting external sound;
the second voice data is voice data received by the first electronic device from the second electronic device.
12. The method of claim 1,
the first electronic device being folded comprises a first portion of the first electronic device being folded to form an angle with a second portion of the first electronic device that is less than a first angle; or
the first electronic device is unfolded until an included angle between a first part of the first electronic device and a second part of the first electronic device is larger than a second angle.
13. An electronic device for processing call content, comprising:
a display screen;
a processor;
a memory for storing a computer program;
the computer program comprises instructions which, when executed by the processor, cause the electronic device to perform the steps of:
receiving a first input when the electronic device is in a call connection state with another electronic device; the first input is that the electronic device is folded; or the first input is that the electronic device is unfolded;
responding to the first input, and acquiring at least one piece of key information of call content between the electronic equipment and the other electronic equipment, wherein the key information comprises a webpage link related to a keyword in the call content;
displaying a first interface on the display screen according to the at least one piece of key information, wherein the first interface comprises an information tag corresponding to the key information, and the information tag corresponds to the webpage link;
receiving a third input to the information tag;
in response to the third input to the information tag, displaying a web page associated with the information tag on the display screen;
ending the call connection with the other electronic device;
responding to the end of the call connection, and ending the acquisition of the key information;
acquiring a latest recorded webpage link of the electronic equipment, wherein the latest recorded webpage link refers to a webpage link last visited by a user before the call service is ended;
starting a browser application, and displaying a webpage corresponding to the latest recorded webpage link on the display screen according to the latest recorded webpage link; or
Determining an application related to the latest recorded web page link according to the latest recorded web page link; acquiring an application link corresponding to the newly recorded webpage link; and starting the application, and displaying an application interface corresponding to the application link on the display screen according to the application link.
14. The electronic device of claim 13, wherein the key information comprises any one or both of:
text data corresponding to at least a part of the call content;
keywords in the call content.
15. The electronic device of claim 14, wherein the tags include either or both of text tags and keyword tags;
the text label corresponds to the text data;
the keyword tag corresponds to the keyword.
16. The electronic device according to claim 13, wherein the obtaining at least one piece of key information of the content of the call between the electronic device and the other electronic device comprises:
sending the call content to a first server;
and receiving the at least one piece of key information sent by the first server.
17. The electronic device of claim 13, wherein the displaying the web page associated with the information tag comprises:
the electronic equipment accesses the webpage associated with the information tag through a browser kernel;
the electronic device displays the web page associated with the information tag in a web view.
18. The electronic device of claim 17, wherein the instructions, when executed by the processor, cause the electronic device, after performing the displaying the web page associated with the information tag, to further perform the steps of:
receiving a fourth input to the web page associated with the information tag;
displaying other web pages on the display screen in response to the fourth input.
19. The electronic device of claim 18, wherein the instructions, when executed by the processor, cause the electronic device, after performing the receiving of the fourth input to the web page associated with the information tag, further perform the steps of:
web page links to other web pages are recorded.
20. The electronic device according to any of claims 13-19, wherein the instructions, when executed by the processor, cause the electronic device, when performing the ending of the obtaining of the key information, to perform the specific steps of:
receiving a second input;
and responding to the second input, and finishing the acquisition of the key information.
21. The electronic device of any of claims 13-19, wherein the instructions, when executed by the processor, cause the electronic device, prior to performing the receiving the first input, further to perform the steps of:
and displaying a second interface on the display screen, wherein the second interface is a call interface.
22. The electronic device of claim 21,
when the first interface is displayed, the second interface is zoomed out and is displayed simultaneously with the first interface; or
when the first interface is displayed, the second interface is reduced to an icon and displayed in a floating manner.
23. The electronic device of any of claims 13-19,
the call content comprises one or both of first voice data and second voice data;
the first voice data is generated by the electronic equipment through collecting external sound;
the second voice data is voice data received by the electronic equipment from the other electronic equipment.
24. The electronic device of claim 13,
the electronic device is folded, wherein an included angle between a first part of the electronic device and a second part of the electronic device is smaller than a first angle; or
the electronic device is unfolded until the included angle between the first part of the electronic device and the second part of the electronic device is larger than a second angle.
25. A computer-readable storage medium having instructions stored therein, which when run on an electronic device, cause the electronic device to perform a call content processing method according to any one of claims 1-12.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910416825.4A CN111970401B (en) | 2019-05-20 | 2019-05-20 | Call content processing method, electronic equipment and storage medium |
PCT/CN2020/090956 WO2020233556A1 (en) | 2019-05-20 | 2020-05-19 | Call content processing method and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910416825.4A CN111970401B (en) | 2019-05-20 | 2019-05-20 | Call content processing method, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111970401A CN111970401A (en) | 2020-11-20 |
CN111970401B true CN111970401B (en) | 2022-04-05 |
Family
ID=73357796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910416825.4A Active CN111970401B (en) | 2019-05-20 | 2019-05-20 | Call content processing method, electronic equipment and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111970401B (en) |
WO (1) | WO2020233556A1 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115309312A (en) * | 2021-04-21 | 2022-11-08 | 花瓣云科技有限公司 | Content display method and electronic equipment |
CN115268736A (en) * | 2021-04-30 | 2022-11-01 | 华为技术有限公司 | Interface switching method and electronic equipment |
CN113660375B (en) * | 2021-08-11 | 2023-02-03 | 维沃移动通信有限公司 | Call method and device and electronic equipment |
CN113672152B (en) * | 2021-08-11 | 2024-08-09 | 维沃移动通信(杭州)有限公司 | Display method and device |
CN113761881A (en) * | 2021-09-06 | 2021-12-07 | 北京字跳网络技术有限公司 | Wrong-word recognition method and device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103379013A (en) * | 2012-04-12 | 2013-10-30 | 腾讯科技(深圳)有限公司 | Geographic information providing method and system based on instant messaging |
CN105279202A (en) * | 2014-07-25 | 2016-01-27 | 中兴通讯股份有限公司 | Information retrieval method and device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8223932B2 (en) * | 2008-03-15 | 2012-07-17 | Microsoft Corporation | Appending content to a telephone communication |
CN105550235A (en) * | 2015-12-07 | 2016-05-04 | 小米科技有限责任公司 | Information acquisition method and information acquisition apparatuses |
KR102495523B1 (en) * | 2016-02-04 | 2023-02-03 | 삼성전자 주식회사 | Method for processing voice instructions and electronic device supporting the same |
CN106713628A (en) * | 2016-12-14 | 2017-05-24 | 北京小米移动软件有限公司 | Method and device for searching information stored in mobile terminal and mobile terminal |
CN106777320A (en) * | 2017-01-05 | 2017-05-31 | 珠海市魅族科技有限公司 | Call householder method and device |
CN107547717A (en) * | 2017-08-01 | 2018-01-05 | 联想(北京)有限公司 | Information processing method, electronic equipment and computer-readable storage medium |
- 2019-05-20: CN CN201910416825.4A, published as CN111970401B (Active)
- 2020-05-19: WO PCT/CN2020/090956, published as WO2020233556A1 (Application Filing)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103379013A (en) * | 2012-04-12 | 2013-10-30 | 腾讯科技(深圳)有限公司 | Geographic information providing method and system based on instant messaging |
CN105279202A (en) * | 2014-07-25 | 2016-01-27 | 中兴通讯股份有限公司 | Information retrieval method and device |
Also Published As
Publication number | Publication date |
---|---|
WO2020233556A1 (en) | 2020-11-26 |
CN111970401A (en) | 2020-11-20 |
Similar Documents
Publication | Title |
---|---|
CN113794800B (en) | Voice control method and electronic equipment | |
CN109814766B (en) | Application display method and electronic equipment | |
CN110597512B (en) | Method for displaying user interface and electronic equipment | |
CN110138959B (en) | Method for displaying prompt of human-computer interaction instruction and electronic equipment | |
CN110119296B (en) | Method for switching parent page and child page and related device | |
CN114461111B (en) | Function starting method and electronic equipment | |
CN111669459B (en) | Keyboard display method, electronic device and computer readable storage medium | |
CN111970401B (en) | Call content processing method, electronic equipment and storage medium | |
CN110910872A (en) | Voice interaction method and device | |
CN111078091A (en) | Split screen display processing method and device and electronic equipment | |
CN113994317A (en) | User interface layout method and electronic equipment | |
CN112130714B (en) | Keyword search method capable of learning and electronic equipment | |
CN113961157A (en) | Display interaction system, display method and equipment | |
CN114363462A (en) | Interface display method and related device | |
CN112068907A (en) | Interface display method and electronic equipment | |
CN113448658A (en) | Screen capture processing method, graphical user interface and terminal | |
CN113852714A (en) | Interaction method for electronic equipment and electronic equipment | |
WO2021196980A1 (en) | Multi-screen interaction method, electronic device, and computer-readable storage medium | |
WO2022033432A1 (en) | Content recommendation method, electronic device and server | |
CN112740148A (en) | Method for inputting information into input box and electronic equipment | |
CN114064160A (en) | Application icon layout method and related device | |
CN112416984A (en) | Data processing method and device | |
WO2022089276A1 (en) | Collection processing method and related apparatus | |
CN116339569A (en) | Split screen display method, folding screen device and computer readable storage medium | |
CN116339568B (en) | Screen display method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |