CN113194024A - Information display method and device and electronic equipment - Google Patents
- Publication number
- CN113194024A (Application CN202110302991.9A)
- Authority
- CN
- China
- Prior art keywords
- information
- input
- image
- category
- interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L51/00—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
- H04L51/07—User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail characterised by the inclusion of specific contents
- H04L51/18—Commands or executable codes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses an information display method and device and an electronic device, and belongs to the field of communication technology. The method can solve the problem that the operation process for obtaining information from an image in a first interface is cumbersome, and includes the following steps: receiving a first input to a first image in a first interface; in response to the first input, displaying at least one category identifier, each category identifier being used to trigger extraction of information associated with one category in the first image; receiving a second input to a target category identifier among the at least one category identifier, the target category identifier being used to trigger extraction of information associated with a target category in the first image; and in response to the second input, displaying first information, the first information being information extracted from the first image based on the target category. The method and device are suitable for scenarios in which information is extracted from an image in an interface.
Description
Technical Field
The application belongs to the field of communication technology, and particularly relates to an information display method and device and an electronic device.
Background
At present, during a conversation conducted through an electronic device, a user often receives images sent by the other party.
Specifically, the electronic device may receive an image through an application program and display it on a session interface of that application program (hereinafter referred to as session interface 1). If the user needs to perform secondary editing on an image in session interface 1, for example to acquire specific information from one image in session interface 1 (hereinafter referred to as image A), the user must first trigger the electronic device to save image A to the gallery, locate image A in the gallery, and trigger the electronic device to display image A through the gallery. The user can then long-press image A to trigger the electronic device to recognize it and display the recognized information, trigger the electronic device to copy and paste that information, and finally delete everything other than the specific information from the pasted text; only then is the specific information in image A obtained.
However, with this method, the specific information in an image of the session interface can be acquired only after this series of operations is performed, so the process of acquiring information from an image in the session interface is cumbersome.
Disclosure of Invention
Embodiments of the present application aim to provide an information display method, an information display device, and an electronic device that can solve the problem of the cumbersome operation process for acquiring information from an image in a session interface.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides an information display method, including: receiving a first input to a first image in a first interface; in response to the first input, displaying at least one category identifier, each category identifier being used to trigger extraction of information associated with one category in the first image; receiving a second input to a target category identifier among the at least one category identifier, the target category identifier being used to trigger extraction of information associated with a target category in the first image; and in response to the second input, displaying first information, the first information being information extracted from the first image based on the target category.
In a second aspect, an embodiment of the present application provides an information display apparatus, which may include a receiving module and a display module. The receiving module is configured to receive a first input to a first image in a first interface. The display module is configured to display, in response to the first input received by the receiving module, at least one category identifier, each category identifier being used to trigger extraction of information associated with one category in the first image. The receiving module is further configured to receive a second input to a target category identifier among the at least one category identifier, the target category identifier being used to trigger extraction of information associated with a target category in the first image. The display module is further configured to display, in response to the second input received by the receiving module, first information, the first information being information extracted from the first image based on the target category.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the information display method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the information display method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the information display method according to the first aspect.
In an embodiment of the present application, a first input to a first image in a first interface may be received; in response to the first input, at least one category identifier is displayed, each category identifier being usable to trigger extraction of information associated with one category in the first image; a second input to a target category identifier among the at least one category identifier is received, the target category identifier being used to trigger extraction of information associated with a target category in the first image; and in response to the second input, first information is displayed, the first information being information extracted from the first image based on the target category. According to this scheme, when a first input by the user to the first image in the first interface is received, at least one category identifier can be displayed, each category identifier being used to trigger extraction of information associated with one category in the first image. The user can therefore directly act on the target category identifier, among the at least one category identifier, that matches the actual extraction requirement; that is, the information associated with a specific category in an image of the first interface can be extracted by category and displayed, without saving the first image, recognizing the information in it through a gallery, and deleting the recognized information other than the first information. The operation process of obtaining information from an image in the first interface can therefore be simplified.
Drawings
Fig. 1 is a schematic diagram of an information display method provided in an embodiment of the present application;
Fig. 2 is a first schematic interface diagram of an application of the information display method according to an embodiment of the present application;
Fig. 3 is a second schematic interface diagram of an application of the information display method according to an embodiment of the present application;
Fig. 4 is a third schematic interface diagram of an application of the information display method according to an embodiment of the present application;
Fig. 5 is a fourth schematic interface diagram of an application of the information display method according to an embodiment of the present application;
Fig. 6 is a fifth schematic interface diagram of an application of the information display method according to an embodiment of the present application;
Fig. 7 is a sixth schematic interface diagram of an application of the information display method according to an embodiment of the present application;
Fig. 8 is a schematic diagram of an information display apparatus according to an embodiment of the present application;
Fig. 9 is a schematic diagram of an electronic device provided in an embodiment of the present application;
Fig. 10 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the related objects before and after it are in an "or" relationship.
It should be noted that the identifiers in the embodiments of the present application are used to indicate information such as words, symbols, and images, and a control or other container may serve as a carrier for displaying the information; identifiers include, but are not limited to, word identifiers, symbol identifiers, and image identifiers.
Embodiments of the present application provide an information display method, an information display device, and an electronic device. A first input to a first image in a first interface may be received; in response to the first input, at least one category identifier is displayed, each category identifier being usable to trigger extraction of information associated with one category in the first image; a second input to a target category identifier among the at least one category identifier is received, the target category identifier being used to trigger extraction of information associated with a target category in the first image; and in response to the second input, first information is displayed, the first information being information extracted from the first image based on the target category. According to this scheme, when a first input by the user to the first image in the first interface is received, at least one category identifier can be displayed, each category identifier being used to trigger extraction of information associated with one category in the first image. The user can therefore directly act on the target category identifier, among the at least one category identifier, that matches the actual extraction requirement; that is, the information associated with a specific category in an image of the first interface can be extracted by category and displayed, without saving the first image, recognizing the information in it through a gallery, and deleting the recognized information other than the first information. The operation process of obtaining information from an image in the first interface can therefore be simplified.
The display method, the display device, and the electronic device provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in Fig. 1, an embodiment of the present application provides an information display method, which may include steps 101 to 104 described below. The information display method is described below by way of example with the information display device as the execution body.
Step 101, the information display device receives a first input to a first image in a first interface.
Optionally, in this embodiment of the application, the first interface may be a session interface or any other interface including an image (for example, an image display interface in an album application), which may be specifically determined according to actual use requirements, and this embodiment of the application is not limited.
Optionally, in this embodiment of the application, when the first interface is a session interface, it may be a session interface in any application program, for example a session interface in a chat application or a session interface in a shopping application.
Optionally, in this embodiment of the application, when the first interface is a session interface, the first image may be an image received by the information display device through the first interface, or may be an image sent through the first interface.
To better describe the information display method provided by the embodiments of the present application, in the following embodiments, unless otherwise specified, the first interface is not limited; that is, the first interface may be any interface including an image.
Optionally, in this embodiment of the application, the first input may be a long-press input, a multi-click (two or more clicks) input, or the like performed by the user on the first image, which may be determined according to actual use requirements and is not limited in this embodiment of the application.
Step 102, in response to the first input, the information display device displays at least one category identifier.
Each category identifier may be used to trigger extraction of information associated with one category in the first image, and different category identifiers may be used to trigger extraction of information associated with different categories in the first image.
In the embodiments of the present application, information associated with one category refers to information whose category is that category.
In the embodiments of the application, one category identifier corresponds to one category.
Optionally, the information in the embodiments of the present application may be any possible information, such as characters (at least one of letters, numbers, Chinese characters, and symbols), images, and the like.
Optionally, in this embodiment of the application, when the information is a character, the category may be: address class, name class, number class, amount class, event class, etc.
For example, "building" in district d of city, b, and c of province a "is address information; "Zhang three" is name information; "1889 xxx 0124" is number class information; "Y3.00" is the amount class information.
Step 103, the information display device receives a second input to a target category identifier among the at least one category identifier.
The target category identifier may be used to trigger extraction of the information associated with the target category in the first image.
It can be understood that, in the embodiments of the present application, the information associated with the target category in the first image is the information that the user requires to be extracted.
Optionally, in this embodiment of the application, the second input may be a click input by the user on the target category identifier or a drag input on the target category identifier, which may be determined according to actual use requirements and is not limited in this embodiment of the application.
Step 104, in response to the second input, the information display device displays the first information.
The first information is information extracted from the first image based on the target category.
In this embodiment, after receiving the second input of the target category identifier by the user, the information display device may, in response to the second input, recognize the information in the first image using an image recognition technology to obtain second information. The information display device may then extract from the second information the first information whose category is the target category and display it. The user can then copy the first information according to actual use requirements or trigger the first information to be displayed in the input box of the first interface. In this way, the user can quickly trigger the information display device to acquire, from the first image, the information (i.e., the first information) that satisfies the use requirement.
It can be understood that, in the embodiment of the present application, a neural network model for classifying information may be trained in advance, and then the information display apparatus may input the second information to the trained neural network model to classify each piece of information in the second information by category, so as to extract information associated with a target category from the second information. Of course, in actual implementation, any other possible manner may also be adopted to extract the first information from the second information, which may be determined according to actual usage requirements, and the embodiment of the present application is not limited.
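As a minimal sketch of the extraction described in step 104, assuming the recognized items of the second information are available as plain strings, the snippet below keeps only the items whose category matches the target category. The `TextItem` type is an assumption, and the default classifier is the illustrative rule-based `classify` function from the earlier sketch; a trained neural network classifier could be plugged in instead, as noted above.

```kotlin
// Hypothetical holder for one recognized piece of text (e.g. one OCR line).
data class TextItem(val text: String)

// Given all recognized items ("second information") and the target category,
// return only the items associated with that category ("first information").
fun extractFirstInformation(
    secondInformation: List<TextItem>,
    targetCategory: InfoCategory,
    classifier: (String) -> InfoCategory = ::classify   // rule-based or model-backed
): List<String> =
    secondInformation
        .filter { classifier(it.text) == targetCategory }
        .map { it.text }
```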
Optionally, in this embodiment of the application, the information display device may display the first information in any possible form, such as a text form, an image form, and the like, which may be determined specifically according to actual use requirements, and this embodiment of the application is not limited.
In the embodiments of the application, when a first input by the user to the first image in the first interface is received, at least one category identifier can be displayed, each category identifier being used to trigger extraction of information associated with one category in the first image. The user can therefore directly act on the target category identifier, among the at least one category identifier, that matches the actual extraction requirement; that is, the information associated with a specific category in an image of the first interface can be extracted by category and displayed, without saving the first image, recognizing the information in it through a gallery, and deleting the recognized information other than the first information. The operation process of obtaining information from an image in the first interface can therefore be simplified.
Optionally, in this embodiment of the application, the first input may include a first sub-input and a second sub-input. The step 102 can be realized by the steps 102a and 102b described below.
Step 102a, in response to a first sub-input by the user on the first image, the information display device displays an extraction control.
The extraction control is used to trigger entry into an information extraction mode.
In the embodiment of the application, after the information display device receives the first sub-input of the user, the shortcut menu can be displayed in response to the first sub-input, and the shortcut menu comprises the extraction control.
Illustratively, as shown in Fig. 2(a), the user may long-press the first image 21 in the conversation interface 20 (i.e., the first sub-input). As shown in Fig. 2(b), the information display apparatus may then display a floating card 22 (i.e., the shortcut menu) on the first image 21, the floating card 22 including an extraction control (the "Extract" control shown in Fig. 2(b)).
Further, the shortcut menu may also include an editing control (e.g., the "Edit" control shown in Fig. 2(b)) for triggering the first image to enter an editable mode, a sharing control for triggering sharing of the first image, a downloading control for triggering downloading of the first image, and the like, which may be determined according to actual use requirements and is not limited in this embodiment of the application.
Optionally, in this embodiment of the application, the first sub-input may be a long-press input, or a multi-click (for example, two or more clicks) input on the first image, which may be determined according to actual usage requirements, and this embodiment of the application is not limited.
Step 102b, in response to a second sub-input by the user on the extraction control, the information display device displays at least one category identifier.
Optionally, in this embodiment of the application, the second sub-input may be a long-press input, a force-press input, or a multi-click (for example, two or more clicks) input by the user on the extraction control, which may be determined according to actual use requirements and is not limited in this embodiment of the application.
Illustratively, as shown in Fig. 2(b), the user may click the "Extract" control (i.e., the second sub-input). In response to the second sub-input, the information display apparatus may control the first image to enter the information extraction mode and, as shown in Fig. 3, display below the first image 21 three of the at least one category identifier, namely an "address class" category identifier 23, a "name class" category identifier 24, and a "number class" category identifier 25, together with a "more" identifier 26. Through an input (e.g., a click) on the "more" identifier 26, the user may trigger the information display device to display the category identifiers other than these three among the at least one category identifier, for example an "amount class" category identifier.
For the description of the category identifier, reference may be specifically made to the related description of the category identifier in step 102, and details are not repeated here to avoid repetition.
Further, as shown in Fig. 3, when the first image 21 is in the information extraction mode, the information display device may further display an exit identifier 27 on the first image 21, and through an input on the exit identifier 27 the user may trigger the information display device to control the first image to exit the information extraction mode and cancel the display of the at least one category identifier.
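Steps 102a and 102b could be wired up along the following lines. This is a framework-agnostic Kotlin sketch: `ShortcutMenu`, `CategoryBar`, and the controller class are hypothetical stand-ins for whatever view components an actual implementation uses, and the option labels simply mirror the controls shown in Fig. 2 and Fig. 3.

```kotlin
// Hypothetical UI hooks; the names are illustrative, not a real widget API.
interface ShortcutMenu { fun show(options: List<String>, onSelect: (String) -> Unit) }
interface CategoryBar {
    fun show(categories: List<InfoCategory>, onPick: (InfoCategory) -> Unit)
    fun hide()
}

class ImageExtractionController(
    private val menu: ShortcutMenu,
    private val categoryBar: CategoryBar,
    private val onCategoryPicked: (InfoCategory) -> Unit
) {
    // Step 102a: the first sub-input (e.g. a long press on the first image)
    // shows the shortcut menu containing the extraction control.
    fun onFirstSubInput() {
        menu.show(listOf("Extract", "Edit", "Share", "Download")) { option ->
            if (option == "Extract") onSecondSubInput()
        }
    }

    // Step 102b: the second sub-input on the extraction control enters the
    // information extraction mode and displays the category identifiers.
    private fun onSecondSubInput() {
        categoryBar.show(
            listOf(InfoCategory.ADDRESS, InfoCategory.NAME, InfoCategory.NUMBER),
            onCategoryPicked
        )
    }

    // Input on the exit identifier: leave the extraction mode and hide the identifiers.
    fun onExitInput() = categoryBar.hide()
}
```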
In the embodiments of the application, the user first triggers the information display device to display the extraction control through the first sub-input, and then triggers the display of the at least one category identifier through the second sub-input on the extraction control, so that information extraction from the first image is triggered in a deliberate, step-by-step manner and misoperation can be avoided.
Optionally, in this embodiment of the application, assuming that the first interface is a session interface and the first image in the first interface includes M pieces of information whose category is the target category, the first information may be at least one of the M pieces of information, where M is a positive integer.
Optionally, in this embodiment of the present application, one piece of information may be an address, a number, or a name. For example, as shown in Fig. 4(a), the first image 21 in the session interface 20 includes two pieces of address-category information: "No. X, Andemen Street, Yuhuatai District, Nanjing, Jiangsu Province" and "Building V, Beijing District, Ningbo, Zhejiang Province". That is, through an input on the target category identifier, the user can trigger the information display device to display at least part of the information of the target category in the first image, which can further improve the flexibility of extracting information from an image of the conversation interface.
In practical implementation, the first image may not include information related to the target category, and in this case, the first image does not include the first information.
Optionally, in this embodiment of the application, for the same target category identifier, different input forms of the second input may result in different extracted first information.
The information display method provided by the embodiments of the present application is exemplarily described below for the case where the second input is a click input by the user on the target category identifier (i.e., mode 1) and the case where the second input is a movement input by the user on the target category identifier (i.e., mode 2).
In mode 1, when the second input is a click input by the user on the target category identifier, after receiving the second input the information display device may, in response to the second input, recognize the information in the first image, and extract and display, from the recognition result (for example, the second information mentioned above), all the information whose category is the target category. That is, in this embodiment, with a single input the user can trigger the information display device to extract all the information associated with the target category in the first image, which can improve the efficiency of extracting information from an image of the first interface.
Mode 1 will be described below by way of example with reference to fig. 4.
Illustratively, as shown in Fig. 4(a), the information display device displays the conversation interface 20, which includes the first image 21. The user may click the "address class" category identifier 23, i.e., the information display device receives the second input from the user. In response to the second input, the information display device may recognize the information in the first image 21 and, as shown in Fig. 4(b), display a floating card 29 above the input box 28 of the conversation interface 20 and display the two recognized pieces of address-category information in the floating card 29 in text form: "No. X, Andemen Street, Yuhuatai District, Nanjing, Jiangsu Province" and "Building V, Beijing District, Ningbo, Zhejiang Province". That is, the information display device may display, in text form, all the information in the first image whose category is the target category.
Optionally, in this embodiment of the application, as shown in Fig. 4(b), the information display device may further display an exit identifier (i.e., the "X" mark on the floating card 29) on the floating card 29, and through an input on the exit identifier the user may trigger the information display device to cancel the display of the floating card 29.
Optionally, in this embodiment of the present application, when the first information includes all information associated with the target category in the first image, that is, the above M pieces of information, after step 104, the information display method provided in this embodiment of the present application may further include step 105 described below.
Step 105, the information display device displays a target control in the area corresponding to each piece of information in the first information.
In an embodiment of the application, the target control may include at least one of a copy control and an extraction sub-control.
The extraction sub-control may be used to trigger the display, in the input box of the first interface, of the information corresponding to that extraction sub-control, and the copy control may be used to trigger copying of the information corresponding to that copy control.
Illustratively, as shown in Fig. 4(b), the information display apparatus displays the floating card 29 above the input box of the conversation interface 20, displays the two pieces of address-category information in list form in the floating card 29, and displays an extraction sub-control and a copy control in the area corresponding to each piece of address-category information. If the user clicks the copy control corresponding to address 1, the information display device may copy address 1 (i.e., "No. X, Andemen Street, Yuhuatai District, Nanjing, Jiangsu Province"); if the user clicks the copy control corresponding to address 2, the information display device may copy address 2 (i.e., "Building V, Beijing District, Ningbo, Zhejiang Province"). Accordingly, if the user clicks the extraction sub-control corresponding to address 1, the information display apparatus may display address 1 in the input box of the conversation interface 20; if the user clicks the extraction sub-control corresponding to address 2, the information display apparatus may display address 2 in the input box of the conversation interface 20. The user can thus, according to actual use requirements, flexibly input information from the first information into the input box of the first interface or copy it, which improves operational flexibility.
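A minimal sketch of step 105 follows, assuming hypothetical `copyToClipboard` and `appendToInputBox` hooks: each piece of the first information is paired with a copy control and an extraction sub-control, as in the Fig. 4(b) example.

```kotlin
// One result row: an extracted item plus its two target controls (assumed structure).
class ResultRow(
    private val info: String,
    private val copyToClipboard: (String) -> Unit,   // e.g. backed by the system clipboard
    private val appendToInputBox: (String) -> Unit   // inserts text into the first interface's input box
) {
    // Copy control: copies the information this row corresponds to.
    fun onCopyClicked() = copyToClipboard(info)

    // Extraction sub-control: displays the information in the input box of the first interface.
    fun onExtractClicked() = appendToInputBox(info)
}

// Usage sketch: one row per piece of the first information (e.g. each recognized address).
fun buildRows(
    firstInformation: List<String>,
    copy: (String) -> Unit,
    insert: (String) -> Unit
): List<ResultRow> = firstInformation.map { ResultRow(it, copy, insert) }
```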
In the embodiments of the application, the information display device can display a target control in the area corresponding to each piece of information in the first information, so that the user can, according to actual use requirements, flexibly input information from the first information into the input box of the first interface or copy it, which improves operational flexibility.
In mode 2, when the second input is an input in which the user moves the target category identifier to one region of the first image (for example, the first region described below), step 104 described above may be specifically implemented by step 104a described below.
Step 104a, in response to the second input, the information display device recognizes the information in the first region of the first image and displays the first information in text form.
It can be understood that, in mode 2, after receiving the second input, the information display device may first determine the first region in the first image and then recognize only the information in the first region, without recognizing the information in the other regions of the first image. This greatly reduces the amount of data to be processed for information recognition, which shortens the time required for recognition, improves recognition efficiency, and enables accurate recognition of the information.
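Mode 2 could be sketched as follows: the drop position of the dragged category identifier determines the first region, only that region is cropped and recognized, and the result is filtered by the target category. The `recognizeText` hook is an assumed stand-in for whatever OCR engine is used; the cropping call is the standard Android `Bitmap.createBitmap` overload, and `classify` is the illustrative classifier from the earlier sketch.

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect

// Hypothetical OCR hook: recognizes text lines in a bitmap.
typealias TextRecognizer = (Bitmap) -> List<String>

// Step 104a sketch: recognize only the first region of the first image and
// keep the items associated with the target category.
fun extractFromRegion(
    firstImage: Bitmap,
    firstRegion: Rect,                 // derived from where the identifier was dropped
    targetCategory: InfoCategory,
    recognizeText: TextRecognizer
): List<String> {
    // Crop the first region so the rest of the image is never recognized,
    // which reduces the amount of data the recognizer has to process.
    val cropped = Bitmap.createBitmap(
        firstImage, firstRegion.left, firstRegion.top,
        firstRegion.width(), firstRegion.height()
    )
    return recognizeText(cropped).filter { classify(it) == targetCategory }
}
```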
Optionally, in this embodiment of the application, each of the at least one category identifier may be a draggable hover button. For example, when the information display device receives a user input (e.g., a second sub-input) to the extraction control, the information display device may control the first image to enter an information extraction mode and hover display the at least one category identifier in an area adjacent to the first image.
Optionally, in this embodiment of the application, when the information display apparatus receives an input of the user to the extraction control, the information display apparatus may further display the first image in an enlarged manner.
The information display method provided by the embodiment of the present application is exemplarily described below with reference to fig. 5.
Illustratively, as shown in Fig. 5(a), the user may long-press the first image 51 in the conversation interface 50 (i.e., the first sub-input). As shown in Fig. 5(b), the information display apparatus may display a floating card 52 (i.e., the shortcut menu) on the first image 51, the floating card 52 including an extraction control (e.g., the "Extract" control shown in Fig. 5(b)). The user may click the extraction control (i.e., the second sub-input); then, as shown in Fig. 5(c), the information display apparatus may display the first image 51 in an enlarged manner, display an exit identifier in the upper right corner of the first image 51, and display at least one category identifier in a floating manner in the blank area to the right of the first image. The at least one category identifier may include an "address class" category identifier, a "name class" category identifier, and a "number class" category identifier.
Further, if the user wants to obtain the information related to "shipping address 1" in the first image, the user may move the "address class" category identifier to the area where "shipping address 1" is located, i.e., the information display apparatus receives the second input on the target category identifier from the user. As shown in Fig. 5(d), the information display apparatus may then display the address-category information related to "shipping address 1" in text form in the input box of the conversation interface 50 and restore the display position of the "address class" category identifier.
It should be noted that the above example is merely illustrative. In actual implementation, the information display device may instead display the address information related to "shipping address 1" on a floating card and then display that address information in the input box when the user performs an input on the address information shown on the floating card. This may be determined according to actual use requirements and is not limited in the embodiments of the application.
For other descriptions in the mode 2, reference may be specifically made to the related description of the above mode 1, and details are not repeated here to avoid repetition.
In the embodiments of the application, the information display device can accurately extract the information associated with the target category in the first region of the first image without performing character recognition on the regions of the first image other than the first region, which reduces the amount of data to be processed for information recognition, shortens the recognition time, and improves recognition efficiency.
Optionally, in this embodiment of the application, when the user needs to extract information associated with a certain category from multiple images in the first interface, the user may first trigger the information display device to associate the multiple images. In this way, when the information display device receives a first input by the user on any one of the associated images, the information display device may display the at least one category identifier, each category identifier being used to trigger extraction of information associated with one category in the multiple images.
For example, in this embodiment of the present application, assuming that the first image includes at least two images in the first interface, before step 101, the information display method provided in this embodiment of the present application may further include step 106 and step 107 described below.
Step 106, the information display device receives a third input by the user on the at least two images.
Step 107, in response to the third input, the information display device associates the at least two images and marks each of them.
In this case, the first input is specifically an input on any one of the at least two images, and the first information is all the information associated with the target category in the at least two images.
Optionally, in this embodiment of the application, the third input may include a third sub-input and a fourth sub-input, where the third sub-input is used to trigger the first interface to enter the associable mode, and the fourth sub-input is used to trigger the association of the at least two images in the first interface.
Illustratively, as shown in Fig. 6(a), the user may perform an input on image 2 in the session interface 60. As shown in Fig. 6(b), the information display apparatus may display a shortcut menu, which may include an extraction control and an association control. The user may then click the association control; as shown in Fig. 6(c), the information display apparatus may display an "associate selected images" control 61 on the session interface 60 and display a selection box 62 in the area corresponding to each image of the session interface, i.e., the information display apparatus controls the session interface to enter an image-associable mode. As can be seen, the third sub-input may include an input on an image in the conversation interface and an input on the association control.
Optionally, in this embodiment of the application, after the information display device controls the image in the first interface to enter the associable mode, the user may select at least two images to be associated through the fourth sub-input.
Illustratively, as shown in Fig. 7(a), the user may click, in turn, the selection boxes corresponding to image 1 and image 2 to trigger the information display apparatus to select image 1 and image 2. After selecting the images to be associated, the user may perform an input on the "associate selected images" control 61 to trigger the information display apparatus to associate image 1 and image 2 (i.e., the at least two images) and mark each of them; for example, as shown in Fig. 7(b), the information display apparatus may display a star in the area corresponding to each of image 1 and image 2, so that the user can distinguish the already-associated images in the first interface. It can be understood that, in the embodiments of the present application, the fourth sub-input includes the input for selecting the images to be associated and the input on the "associate selected images" control.
Optionally, in this embodiment of the application, the information display apparatus may bind the identifiers of the at least two images to implement association of the at least two images.
In an embodiment of the present application, after associating the at least two images, the user may perform the first input on any one of the at least two images (hereinafter referred to as the target image) to trigger the information display device to display a shortcut menu, where the shortcut menu may include an extraction control and a disassociation control. If the user performs an input on the extraction control, the information display device may display the at least one category identifier, so that the user may perform the second input on a target category identifier among them to trigger the information display device to recognize the information associated with the target category in the at least two images and display, by category, all the information associated with the target category in the at least two images. Thus, when the user needs to extract information associated with the same category from multiple images, the user may first trigger the association of the multiple images and then perform the inputs that trigger information extraction (i.e., the first input and the second input), so that the information associated with the same category in the multiple images can be extracted quickly.
Optionally, in this embodiment of the application, when the user triggers, through the first input and the second input, the information display device to extract information associated with a certain category from the at least two associated images, the information display device may extract the information associated with that category from the at least two images, perform de-duplication processing on the extracted information, and display the de-duplicated information. That is, the information display device may display the extracted information in aggregate, and the first information is the information after de-duplication.
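For the multi-image case, the following sketch binds the identifiers of associated images (as in step 107), extracts the target-category information from every image in the group, and de-duplicates before display. The registry type, the image-id strings, and the `recognize` hook are assumptions for illustration; the `disassociate` method mirrors the disassociation behavior described next.

```kotlin
// Hypothetical registry that binds the identifiers of associated images (step 107).
class ImageAssociationRegistry {
    private val groups = mutableListOf<MutableSet<String>>()   // each set holds bound image ids

    fun associate(imageIds: Collection<String>) {
        groups.add(imageIds.toMutableSet())
    }

    fun disassociate(imageId: String) {
        groups.forEach { it.remove(imageId) }                   // the rest of the group stays bound
    }

    fun groupOf(imageId: String): Set<String> =
        groups.firstOrNull { imageId in it } ?: setOf(imageId)
}

// Extract the target-category information from every associated image and
// de-duplicate before display, as described above.
fun extractFromAssociatedImages(
    anyImageId: String,
    registry: ImageAssociationRegistry,
    recognize: (String) -> List<String>,          // hypothetical: image id -> recognized text items
    targetCategory: InfoCategory
): List<String> =
    registry.groupOf(anyImageId)
        .flatMap { id -> recognize(id).filter { classify(it) == targetCategory } }
        .distinct()                                // aggregate display after de-duplication
```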
Optionally, in this embodiment of the application, if the user performs an input on the disassociation control, the information display apparatus may disassociate the target image from the other images of the at least two images. For example, assuming that the at least two images are image 1, image 2, and image 3, the user may long-press image 1 to trigger display of a shortcut menu including a disassociation control; if the user clicks the disassociation control, the information display apparatus may disassociate image 1 while keeping image 2 and image 3 associated. That is, the user may then trigger the information display device to extract information associated with the same category in image 2 and image 3 through a first input and a second input.
In the embodiments of the application, the information display device may associate at least two images in the session interface based on the third input by the user, so that the user can trigger the information display device to extract all the information associated with the target category in the at least two images through a single first input and a single second input, which further improves the convenience of extracting information associated with a specific category in the first interface.
It should be noted that, in the information display method provided by the embodiments of the present application, the execution body may be an information display device or a control module in the information display device for executing the information display method. In the embodiments of the present application, the information display method is described by taking an information display device that executes the method as an example.
As shown in fig. 8, an embodiment of the present application provides an information display apparatus 80, where the information display apparatus 80 may include: a receiving module 81 and a display module 82. A receiving module 81, which may be used to receive a first input to a first image in a first interface; a display module 82, configured to display at least one category identifier in response to the first input received by the receiving module 81, where each category identifier may be used to trigger extraction of information associated with a category in the first image; the receiving module 81 may be further configured to receive a second input of a target category identifier in the at least one category identifier, where the target category identifier may be used to trigger extraction of information associated with the target category in the first image; the display module 82 may be further configured to display, in response to the second input received by the receiving module 81, first information, which is information extracted from the first image based on the target category.
In the information display device provided by the embodiments of the application, when the device receives a first input by the user on a first image in a first interface, at least one category identifier can be displayed, each category identifier being used to trigger extraction of information associated with one category in the first image. The user can therefore directly act on the target category identifier, among the at least one category identifier, that matches the actual extraction requirement; that is, the information associated with a specific category in an image of the first interface can be extracted by category and displayed, without saving the first image, recognizing the information in it through a gallery, and deleting the recognized information other than the first information. The operation process of obtaining information from an image in the first interface can therefore be simplified.
Optionally, in this embodiment of the present application, the first input includes a first sub-input and a second sub-input. The display module 82 may be specifically configured to display an extraction control in response to the first sub-input on the first image, and to display the at least one category identifier in response to the second sub-input on the extraction control, where the extraction control may be used to trigger entry into an information extraction mode.
Optionally, in this embodiment of the application, the first image may include M pieces of information associated with the target category; the first information may be at least one of the M pieces of information, where M is a positive integer.
Optionally, in this embodiment of the application, the first information may include the M pieces of information. The display module 82 may be further configured to display, after the first information is displayed, a target control in the area corresponding to each piece of information in the first information. The target control may include at least one of a copy control and an extraction sub-control; the extraction sub-control may be used to trigger display, in the input box of the first interface, of the information corresponding to that extraction sub-control, and the copy control is used to trigger copying of the information corresponding to that copy control.
Optionally, in this embodiment of the application, the second input is an input of moving the target category identifier to the first region of the first image. The display module 82 may be specifically configured to recognize the information in the first region and display the recognized first information in text form, where the first information is all the information in the first region whose category is the target category.
Optionally, in this embodiment of the present application, the information display apparatus may further include an association module; the first image includes at least two images in the first interface; the receiving module 81 may be further configured to receive a third input for at least two images before receiving the first input for the first image in the first interface; an associating module, configured to associate the at least two images and mark the at least two images respectively in response to a third input received by the receiving module 81; the first input is specifically input to any one of at least two images; the first information is all information associated with the target category in the at least two images.
The beneficial effects of the various implementation manners in this embodiment may specifically refer to the beneficial effects of the corresponding implementation manners in the above method embodiments, and are not described herein again to avoid repetition.
The information display device 80 in the embodiment of the present application may be an electronic device, or may be a component, an integrated circuit, or a chip in an electronic device. The electronic device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The information display device in the embodiments of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The information display device provided in the embodiment of the present application can implement each process implemented by the information display device in the method embodiments of fig. 1 to fig. 7, and is not described here again to avoid repetition.
As shown in fig. 9, an electronic device 200 according to an embodiment of the present application is further provided, which includes a processor 202, a memory 201, and a program or an instruction stored in the memory 201 and executable on the processor 202, and when the program or the instruction is executed by the processor 202, the process of the information display method embodiment is implemented, and the same technical effect can be achieved, and details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is not repeated here.
The user input unit 1007 may be configured to receive a first input on a first image in the first interface; a display unit 1006, which may be configured to display at least one category identifier in response to a first input received by the user input unit 1007, where each category identifier may be used to trigger extraction of information associated with a category in the first image; the user input unit 1007 may be further configured to receive a second input of a target category identifier in the at least one category identifier, where the target category identifier may be used to trigger extraction of information associated with the target category in the first image; the display unit 1006 may be further configured to display first information in response to a second input received by the user input unit 1007, where the first information is information extracted from the first image based on the target category.
In the electronic device provided by the embodiments of the application, when the electronic device receives a first input by the user on a first image in a first interface, at least one category identifier can be displayed, each category identifier being used to trigger extraction of information associated with one category in the first image. The user can therefore directly act on the target category identifier, among the at least one category identifier, that matches the actual extraction requirement; that is, the information associated with a specific category in an image of the first interface can be extracted by category and displayed, without saving the first image, recognizing the information in it through a gallery, and deleting the recognized information other than the first information. The operation process of obtaining information from an image in the first interface can therefore be simplified.
Optionally, in this embodiment of the present application, the first input includes a first sub-input and a second sub-input. The display unit 1006 may be specifically configured to display an extraction control in response to the first sub-input on the first image, and to display the at least one category identifier in response to the second sub-input on the extraction control, where the extraction control may be used to trigger entry into an information extraction mode.
Optionally, in this embodiment of the application, the first image may include M pieces of information associated with the target category; the first information may be at least one of the M pieces of information, where M is a positive integer.
Optionally, in this embodiment of the application, the first information may include the M pieces of information. The display unit 1006 may be further configured to display, after the first information is displayed, a target control in the area corresponding to each piece of information in the first information. The target control may include at least one of a copy control and an extraction sub-control; the extraction sub-control may be used to trigger display, in the input box of the first interface, of the information corresponding to that extraction sub-control, and the copy control is used to trigger copying of the information corresponding to that copy control.
Optionally, in this embodiment of the application, the second input is an input of moving the target category identifier to the first region of the first image. The processor 1010 may be specifically configured to recognize the information in the first region and display the recognized first information in text form, where the first information is all the information in the first region whose category is the target category.
Optionally, in this embodiment of the application, the first image includes at least two images in the first interface. The user input unit 1007 may be further configured to receive a third input on the at least two images before receiving the first input on the first image in the first interface, and the processor 1010 may be configured to associate the at least two images and mark each of them in response to the third input received by the user input unit 1007. In this case, the first input is specifically an input on any one of the at least two images, and the first information is all the information associated with the target category in the at least two images.
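A minimal sketch of the multi-image case, again with hypothetical names, associates the images first and then filters the recognized items of all associated images by the target category:

```kotlin
// Hypothetical multi-image association; the group structure and recognizer signature are illustrative.
data class ImageGroup(val images: List<ByteArray>)

// Third input: associate and mark at least two images so they are extracted together.
fun associate(images: List<ByteArray>): ImageGroup {
    require(images.size >= 2) { "at least two images are needed to form a group" }
    return ImageGroup(images)
}

// The later first and second inputs then extract the target category across every image in the group.
fun extractFromGroup(
    group: ImageGroup,
    targetCategory: String,
    recognize: (ByteArray) -> List<Pair<String, String>> // (text, category) pairs per image
): List<String> =
    group.images
        .flatMap { recognize(it) }
        .filter { (_, category) -> category == targetCategory }
        .map { (text, _) -> text }
```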
For the beneficial effects of the various implementations in this embodiment, reference may be made to the beneficial effects of the corresponding implementations in the above method embodiments; to avoid repetition, they are not described here again.
It should be understood that in the embodiment of the present application, the input Unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071 is also referred to as a touch screen. The touch panel 10071 may include two parts, a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and operating systems. Processor 1010 may integrate an application processor that handles primarily operating systems, user interfaces, applications, etc. and a modem processor that handles primarily wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 1010.
An embodiment of the present application further provides a readable storage medium on which a program or instructions are stored. When the program or instructions are executed by a processor, the processes of the above information display method embodiment are implemented and the same technical effects can be achieved; to avoid repetition, details are not repeated here.
The processor is a processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
An embodiment of the present application further provides a chip. The chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or instructions to implement the processes of the above information display method embodiment, with the same technical effects.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. It should further be noted that the scope of the methods and apparatuses in the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed; the functions may also be performed in a substantially simultaneous manner or in the reverse order, depending on the functions involved. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly can also be implemented by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a software product that is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. An information display method, characterized in that the method comprises:
receiving a first input to a first image in a first interface;
displaying at least one category identifier in response to the first input, each category identifier being used to trigger extraction of information associated with a category in the first image;
receiving a second input of a target category identifier in the at least one category identifier, wherein the target category identifier is used for triggering extraction of information associated with a target category in the first image;
in response to the second input, displaying first information, the first information being information extracted from the first image based on the target category.
2. The method of claim 1, wherein the first input comprises a first sub-input and a second sub-input;
the displaying at least one category identifier in response to the first input comprises:
in response to the first sub-input to the first image, displaying an extraction control for triggering entry into an information extraction mode;
displaying the at least one category identifier in response to the second sub-input to the extraction control.
3. The method of claim 1, wherein the first image includes M pieces of information associated with the target category; the first information is at least one of the M pieces of information, and M is a positive integer.
4. The method of claim 3, wherein the first information comprises the M pieces of information;
after the displaying the first information, the method further comprises:
respectively displaying a target control in an area corresponding to each piece of information in the first information, wherein the target control comprises at least one of a copy control and an extraction sub-control;
wherein the extraction sub-control is used for triggering display of the information corresponding to the extraction sub-control in an input box of the first interface, and the copy control is used for triggering copying of the information corresponding to the copy control.
5. The method according to any one of claims 1 to 4, wherein the second input is an input of moving the target category identifier to a first area of the first image;
the displaying the first information includes:
identifying information in the first area and displaying the identified first information in text form;
wherein the first information is all information associated with the target category in the first area.
6. The method of claim 1 or 2, wherein the first image comprises at least two images in the first interface;
prior to the receiving the first input to the first image in the first interface, the method further comprises:
receiving a third input to the at least two images;
in response to the third input, associating the at least two images and labeling the at least two images respectively;
wherein the first input is specifically an input to any one of the at least two images; the first information is all information associated with the target category in the at least two images.
7. An information display apparatus, characterized in that the apparatus comprises: the device comprises a receiving module and a display module;
the receiving module is used for receiving a first input to a first image in a first interface;
the display module is used for displaying at least one category identifier in response to the first input received by the receiving module, wherein each category identifier is used for triggering extraction of information associated with one category in the first image;
the receiving module is further configured to receive a second input of a target category identifier in the at least one category identifier, where the target category identifier is used to trigger extraction of information associated with a target category in the first image;
the display module is further configured to display first information in response to the second input received by the receiving module, wherein the first information is information extracted from the first image based on the target category.
8. The apparatus of claim 7, wherein the first input comprises a first sub-input and a second sub-input;
the display module is specifically configured to display an extraction control in response to the first sub-input to the first image; and to display the at least one category identification in response to the second sub-input to the extraction control, the extraction control to trigger entry into an information extraction mode.
9. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the information display method according to any one of claims 1 to 6.
10. A readable storage medium, characterized in that the readable storage medium stores thereon a program or instructions which, when executed by a processor, implement the steps of the information display method according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110302991.9A CN113194024B (en) | 2021-03-22 | 2021-03-22 | Information display method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110302991.9A CN113194024B (en) | 2021-03-22 | 2021-03-22 | Information display method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113194024A (en) | 2021-07-30 |
CN113194024B CN113194024B (en) | 2023-04-18 |
Family
ID=76973582
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110302991.9A (CN113194024B, active) | Information display method and device and electronic equipment | 2021-03-22 | 2021-03-22 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113194024B (en) |
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104090648A (en) * | 2014-05-21 | 2014-10-08 | 中兴通讯股份有限公司 | Data entry method and terminal |
CN105589942A (en) * | 2015-12-16 | 2016-05-18 | 北京金山安全软件有限公司 | Picture display method and device and electronic equipment |
US20180109732A1 (en) * | 2016-10-19 | 2018-04-19 | Lg Electronics Inc. | Mobile terminal |
CN108108249A (en) * | 2016-11-25 | 2018-06-01 | 北京嘀嘀无限科技发展有限公司 | Data inputting method and device |
US20190369825A1 (en) * | 2018-06-05 | 2019-12-05 | Samsung Electronics Co., Ltd. | Electronic device and method for providing information related to image to application through input unit |
WO2020051881A1 (en) * | 2018-09-14 | 2020-03-19 | 深圳市欢太科技有限公司 | Information prompt method and related product |
CN109635683A (en) * | 2018-11-27 | 2019-04-16 | 维沃移动通信有限公司 | Method for extracting content and terminal device in a kind of image |
CN109857494A (en) * | 2018-12-24 | 2019-06-07 | 维沃移动通信有限公司 | A kind of message prompt method and terminal device |
WO2020238938A1 (en) * | 2019-05-29 | 2020-12-03 | 维沃移动通信有限公司 | Information input method and mobile terminal |
CN111090489A (en) * | 2019-12-26 | 2020-05-01 | 维沃移动通信有限公司 | Information control method and electronic equipment |
CN111638846A (en) * | 2020-05-26 | 2020-09-08 | 维沃移动通信有限公司 | Image recognition method and device and electronic equipment |
CN111901896A (en) * | 2020-07-14 | 2020-11-06 | 维沃移动通信有限公司 | Information sharing method, information sharing device, electronic equipment and storage medium |
CN112396054A (en) * | 2020-11-30 | 2021-02-23 | 泰康保险集团股份有限公司 | Text extraction method and device, electronic equipment and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113835598A (en) * | 2021-09-03 | 2021-12-24 | 维沃移动通信(杭州)有限公司 | Information acquisition method and device and electronic equipment |
WO2023051384A1 (en) * | 2021-09-29 | 2023-04-06 | 维沃移动通信有限公司 | Display method, information sending method, and electronic device |
WO2023056900A1 (en) * | 2021-10-08 | 2023-04-13 | 北京字跳网络技术有限公司 | Information display method and apparatus, and electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN113194024B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113194024B (en) | Information display method and device and electronic equipment | |
CN113126838A (en) | Application icon sorting method and device and electronic equipment | |
CN112269798B (en) | Information display method and device and electronic equipment | |
CN113518026A (en) | Message processing method and device and electronic equipment | |
CN113918055A (en) | Message processing method and device and electronic equipment | |
CN112612391A (en) | Message processing method and device and electronic equipment | |
CN104572654A (en) | User searching method and device | |
CN112486444A (en) | Screen projection method, device, equipment and readable storage medium | |
CN112929494B (en) | Information processing method, information processing apparatus, information processing medium, and electronic device | |
CN112333084B (en) | File sending method and device and electronic equipment | |
CN113641886A (en) | Searching method and device and electronic equipment | |
CN112162808A (en) | Interface display method and device and electronic equipment | |
CN112764633B (en) | Information processing method and device and electronic equipment | |
CN113590008A (en) | Chat message display method and device and electronic equipment | |
CN113325978A (en) | Message display method and device and electronic equipment | |
CN112818094A (en) | Chat content processing method and device and electronic equipment | |
CN112995506A (en) | Display control method, display control device, electronic device, and medium | |
CN113325986B (en) | Program control method, program control device, electronic device and readable storage medium | |
CN112183149B (en) | Graphic code processing method and device | |
CN116225289A (en) | Dynamic information management method and device and electronic equipment | |
CN111796733B (en) | Image display method, image display device and electronic equipment | |
CN111796736B (en) | Application sharing method and device and electronic equipment | |
CN113515216A (en) | Application program switching method and device and electronic equipment | |
CN113885743A (en) | Text content selection method and device | |
CN113589983A (en) | Graphic identifier display method and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||