CN111638844A - Screen capturing method and device and electronic equipment - Google Patents
- Publication number
- CN111638844A CN111638844A CN202010444244.4A CN202010444244A CN111638844A CN 111638844 A CN111638844 A CN 111638844A CN 202010444244 A CN202010444244 A CN 202010444244A CN 111638844 A CN111638844 A CN 111638844A
- Authority
- CN
- China
- Prior art keywords
- input
- interface
- identification
- objects
- screen
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04847—Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application discloses a screen capture method, a screen capture device, and an electronic device, belonging to the technical field of communication, and addresses the problem that capturing multiple pieces of content from an interface of an electronic device is a tedious and time-consuming process. The method comprises the following steps: receiving a first input in a case that a first interface displayed by the electronic device includes M objects; and, in response to the first input, performing N screen capture operations on target objects among the M objects to obtain N screen capture images. Each of the N screen capture images contains at least one of the M objects, and the display size of each screen capture image is smaller than the display size of the first interface; the target objects comprise some or all of the M objects, and M and N are positive integers greater than 1. The method can be applied to scenarios in which an interface of an electronic device is screen-captured.
Description
Technical Field
Embodiments of the application relate to the technical field of communication, and in particular to a screen capture method, a screen capture device, and an electronic device.
Background
In the course of using an electronic device, a user often needs a screenshot of only part of the content in an interface displayed by the device, rather than a screenshot of the whole interface — for example, a screenshot of part of the picture content or part of the text content in the display interface.
At present, when a user needs a screenshot of part of the content in a display interface, the electronic device must first be triggered to enter a screenshot mode so that a content selection frame is displayed in the interface. The user then has to manually drag the content selection frame to select part of the content, after which the electronic device can be triggered to perform a screen capture operation on that content and obtain its screenshot.
Therefore, when a user needs multiple screenshots corresponding to multiple pieces of content in the same interface displayed by the electronic device, the user has to repeat the manual content-selection operation multiple times so that the electronic device captures each piece of content separately. The process of capturing multiple pieces of content from one interface of the electronic device is thus tedious and time-consuming.
Disclosure of Invention
Embodiments of the application aim to provide a screen capture method, a screen capture device, and an electronic device that solve the problem that capturing multiple pieces of content from one interface of an electronic device is a tedious and time-consuming process for the user.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a screen capture method, the method comprising: receiving a first input in a case that a first interface displayed by an electronic device includes M objects; and, in response to the first input, performing N screen capture operations on target objects among the M objects to obtain N screen capture images. Each of the N screen capture images contains at least one of the M objects, and the display size of each screen capture image is smaller than the display size of the first interface; the target objects comprise some or all of the M objects, and M and N are positive integers greater than 1.
In a second aspect, an embodiment of the present application provides a screen capture device, comprising a receiving module and a processing module. The receiving module is configured to receive a first input in a case that a first interface displayed by the electronic device includes M objects; the processing module is configured to perform, in response to the first input received by the receiving module, N screen capture operations on target objects among the M objects to obtain N screen capture images. Each of the N screen capture images contains at least one of the M objects, and the display size of each screen capture image is smaller than the display size of the first interface; the target objects comprise some or all of the M objects, and M and N are positive integers greater than 1.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, and the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiments of the application, in a case that a first interface displayed by an electronic device includes M objects, a first input may be received, and in response to the first input, N screen capture operations may be performed on target objects among the M objects, yielding N screen capture images. Each screen capture image contains at least one of the M objects, the display size of each screen capture image is smaller than that of the first interface, the target objects comprise some or all of the M objects, and M and N are positive integers greater than 1. It can be understood that the N screen capture images are multiple screenshots corresponding, respectively, to multiple pieces of partial content in the first interface. In other words, instead of repeatedly selecting content in the first interface through a content selection frame, the user can, with a single convenient first input, trigger the electronic device to perform screen capture operations on different pieces of content in the first interface and obtain a screenshot of each. This simplifies the user's operations when capturing multiple pieces of content from one interface, and reduces the time consumed.
Drawings
Fig. 1 is a schematic diagram of a screen capture method provided in an embodiment of the present application;
fig. 2 is a schematic view of a screen capture operation of an electronic device according to an embodiment of the present disclosure;
fig. 3 is a second schematic view illustrating a screen capturing operation of an electronic device according to an embodiment of the present disclosure;
fig. 4 is a second schematic diagram of a screen capturing method according to an embodiment of the present application;
fig. 5 is a third schematic diagram of a screen capture method according to an embodiment of the present application;
fig. 6 is a third schematic view illustrating a screen capturing operation of an electronic device according to an embodiment of the present application;
FIG. 7 is a fourth schematic diagram illustrating a screen capture method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a screen capture device according to an embodiment of the present application;
fig. 9 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that terms so used are interchangeable under appropriate circumstances, such that the embodiments of the application are capable of operating in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The screen capture method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
In the embodiments of the application, in a case that a first interface displayed by the electronic device includes M objects, the user can trigger the electronic device, through a first input, to perform N screen capture operations on target objects among the M objects, obtaining N screen capture images. Each screen capture image contains at least one of the M objects, the display size of each screen capture image is smaller than that of the first interface, and M and N are positive integers greater than 1. The N screen capture images are multiple screenshots corresponding, respectively, to multiple pieces of partial content in the first interface. In other words, instead of repeatedly selecting content in the first interface through a content selection frame, the user can, with a single convenient first input, trigger the electronic device to perform screen capture operations on different pieces of content in the first interface and obtain a screenshot of each. This simplifies the user's operations when capturing multiple pieces of content from one interface, and reduces the time consumed.
As shown in fig. 1, the present application embodiment provides a screen capture method, which may include steps 101 and 102 described below.
Step 101, receiving a first input under the condition that a first interface displayed by the electronic equipment comprises M objects.
Optionally, in the embodiments of the application, the first interface may be any one of the following: a desktop interface of the electronic device, an application interface of the electronic device, or an interface that simultaneously displays the desktop interface and at least one application interface (e.g., an interface displaying an application widget on the desktop interface). This may be chosen according to actual use requirements and is not specifically limited in the embodiments of the present application.
Optionally, in this embodiment of the application, the M objects are determined according to content types of the content in the first interface.
Wherein the content types include at least one of: text, pictures, video.
It should be noted that, in the embodiments of the application, the M objects may be of different content types, and the first interface may accordingly be a display interface showing a combination of content types. Specifically, the first interface may display content including text, pictures, videos, and the like, and combinations thereof (i.e., including M objects). In addition, any one of the M objects may be a piece of partial content or sub-content in the first interface.
Optionally, in the embodiments of the present application, the M objects may be determined from the first interface in either of the following ways. In the first way, the determination is triggered by the first input: when the electronic device receives the first input and is triggered to perform the screen capture operation, it is also triggered to recognize the first interface and thereby determine the M objects. In the second way, the determination takes place before the first input: while the electronic device is displaying the first interface, the M objects are determined from the content displayed on it. Which way is used may be chosen according to actual use requirements and is not specifically limited in the embodiments of the present application.
Optionally, in the embodiments of the application, the electronic device may classify the displayed content according to content type and determine the objects in each class using a recognition algorithm corresponding to that class. Specifically, the electronic device may classify the content displayed on the first interface into a picture class and a text class (video content is handled as the picture class, using an image captured at the moment the classification is performed), and then determine objects using a first preset algorithm for the picture class and a second preset algorithm for the text class. The first preset algorithm may be at least one of the following: an image recognition algorithm, a face recognition algorithm, a foreground recognition algorithm, and the like. The second preset algorithm may be a text recognition algorithm, a text layout recognition algorithm, and the like. This may be chosen according to actual use requirements and is not specifically limited in the embodiments of the present application.
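The per-type dispatch described above can be sketched as follows. This is an illustrative sketch only: the dictionary layout and the recognizer functions (`recognize_picture`, `recognize_text`) are hypothetical stand-ins for the first and second preset algorithms, not an implementation from the patent.

```python
# Hypothetical sketch of classifying interface content by type and running
# the matching recognizer; names and data layout are assumptions.

def recognize_picture(item):
    # Stand-in for the "first preset algorithm" (image / face / foreground
    # recognition); here it simply treats the whole item as one object.
    return [{"type": "picture", "content": item["content"]}]

def recognize_text(item):
    # Stand-in for the "second preset algorithm" (text recognition / text
    # layout recognition); here it splits the text into paragraphs.
    return [{"type": "text", "content": p}
            for p in item["content"].split("\n\n")]

def determine_objects(interface_content):
    """Classify each content item and run the matching recognizer.
    Video items are handled as the picture class, per the description."""
    objects = []
    for item in interface_content:
        is_picture = item["type"] in ("picture", "video")
        recognizer = recognize_picture if is_picture else recognize_text
        objects.extend(recognizer(item))
    return objects
```

For instance, an interface holding one two-paragraph text block and one video frame would yield three objects (two text objects and one picture object), which would then be the M objects of the method.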
Optionally, in the embodiments of the application, the first input is used to trigger the electronic device to start a screen capture operation according to a preset mode. Specifically, the first input may be a press input on a physical key of the electronic device — for example, a long-press input on a volume key, which starts the screen capture operation. The first input may also be a touch input on the first interface or on a screen capture control displayed in the first interface, where the screen capture control triggers the electronic device to perform the screen capture operation; the touch input may be any of the following: a single-click input, a double-click input, a long-press input, and the like. The first input may also be a voice input to the electronic device, and so on. This may be chosen according to actual use requirements and is not specifically limited in the embodiments of the present application. It can be understood that the first input is a simple input form that is convenient for the user; compared with the related art, the user does not need to perform multiple manual frame-selection inputs.
And step 102, the electronic equipment responds to the first input, and carries out screen capturing operation on a target object in the M objects for N times to obtain N screen capturing images.
Each of the N screen shots contains at least one of the M objects, and a display size of each screen shot is smaller than a display size of the first interface; the target objects include part or all of the M objects, and M and N are positive integers greater than 1.
It should be noted that, in the embodiments of the present application, each of the N screen capture images contains at least one of the M objects and at most (M-1) of them. The case of capturing all M objects from the first interface at once is not specifically limited here; it corresponds to performing a whole-screen capture of the first interface. For example, assuming a picture includes 5 head portraits (i.e., M is 5), the user may use the first input to trigger the electronic device to capture a screenshot containing the 1st head portrait, a screenshot containing the 2nd head portrait, a screenshot containing the 3rd head portrait, and a screenshot containing the 4th head portrait.
In addition, the size relationship between M and N is not specifically limited in the embodiments of the present application: M may be greater than, equal to, or less than N, according to actual use requirements.
Specifically, the target objects may be any of the following: all of the M objects, any single one of the M objects, any combination of at least two of the M objects, and the like. This may be chosen according to actual use requirements and is not specifically limited in the present application.
Optionally, in the embodiments of the application, after the electronic device obtains the N screen capture images, it may store them on the device, or share them through a corresponding application program (for example, share them with a friend); after sharing is completed, the captured images may be kept on the device or deleted directly. This may be chosen according to actual use requirements and is not specifically limited in the embodiments of the present application.
For example, fig. 2 is one of the screen capture operation diagrams of the electronic device. As shown in fig. 2 (a), 4 sections of text are displayed on the interface 001 of the electronic device 00 (that is, the first interface includes four objects of the text type), specifically: "XX city weather forecast" with its details, and "XX city traffic information" with its details. The user can long-press the interface 001, and in response to the long-press input (i.e., the first input), as shown in fig. 2 (b), the electronic device 00 frames the regions to be captured — region 002, region 003, region 004, and region 005 — on the interface 001, performs the screen capture operation on each of them, and obtains 4 screenshots, which are stored in the electronic device 00.
Fig. 3 is a second screen capture operation diagram of the electronic device. As shown in fig. 3 (a), 3 animal patterns are displayed on the interface 006 of the electronic device 00 (that is, the first interface includes 3 objects of the picture type). The user can long-press the interface 006, and in response to the long-press input (i.e., the first input), as shown in fig. 3 (b), the electronic device 00 frames the regions to be captured — region 007, region 008, and region 009 (i.e., the regions respectively containing the 3 animal patterns) — on the interface 006, performs the screen capture operation on each of them, and obtains 3 screenshots, which are stored in the electronic device 00.
The embodiments of the application provide a screen capture method in which, in a case that a first interface displayed by an electronic device includes M objects, a first input is received, and in response to the first input, N screen capture operations are performed on target objects among the M objects to obtain N screen capture images. Each screen capture image contains at least one of the M objects, the display size of each screen capture image is smaller than that of the first interface, the target objects comprise some or all of the M objects, and M and N are positive integers greater than 1. It can be understood that the N screen capture images are multiple screenshots corresponding, respectively, to multiple pieces of partial content in the first interface. In other words, instead of repeatedly selecting content in the first interface through a content selection frame, the user can, with a single convenient first input, trigger the electronic device to perform screen capture operations on different pieces of content in the first interface and obtain a screenshot of each. This simplifies the user's operations when capturing multiple pieces of content from one interface, and reduces the time consumed.
Optionally, with reference to fig. 1, as shown in fig. 4, the step 102 "performing N screen shots on a target object of the M objects in response to a first input" may be specifically implemented by the following step 102 a.
Step 102a, in response to the first input, the electronic device divides the first interface into N regions and performs a screen capture operation on each of the N regions in the first interface.
Wherein each of the regions contains at least one of the M objects.
Optionally, in the embodiments of the application, the electronic device may divide the first interface containing the M objects into the N regions in either of the following ways. In method 1, the electronic device recognizes the M objects and sends the position information of each object in the first interface (for example, coordinate information, specifically including barycentric coordinates and edge-point coordinates) to a server; the server determines the position information of the N regions from it according to a preset algorithm (for example, the coordinate information of each region, specifically including the barycentric coordinates and edge-point coordinates of the region) and sends that information back, and the electronic device determines the N regions from the received position information. In method 2, the electronic device itself determines the position information of each of the N regions directly, from the position information of each of the M objects in the first interface and a preset algorithm. The preset algorithm may be at least one of the following: an image-recognition boundary algorithm, an image-recognition region-growing algorithm, a foreground recognition algorithm, and the like. This may be chosen according to actual use requirements and is not specifically limited in the embodiments of the present application.
Optionally, the shape of each of the N regions is not specifically limited in the present application. Specifically, each region may be a rectangle, a circle, an ellipse, or the like; it may also be an irregular shape that envelops at least one of the M objects along the edge contour of that object. This may be chosen according to actual use requirements and is not specifically limited in the embodiments of the present application. It should be noted that the following embodiments take a rectangular region shape as an example, where the rectangle is the rectangle of smallest area that contains at least one of the M objects.
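For the rectangular case used in the following embodiments, determining a region from an object's edge-point coordinates reduces to computing the smallest axis-aligned rectangle enclosing those points. The sketch below is illustrative only; it assumes each object is described by a list of `(x, y)` edge points, and maps one region per object (so N equals M), which is only one of the possibilities described above.

```python
# Illustrative sketch: minimal-area axis-aligned rectangle per object,
# computed on the device itself (method 2 above). Input format is assumed.

def bounding_region(edge_points):
    """Return (left, top, right, bottom) of the smallest axis-aligned
    rectangle enclosing the given edge points."""
    xs = [x for x, _ in edge_points]
    ys = [y for _, y in edge_points]
    return (min(xs), min(ys), max(xs), max(ys))

def determine_regions(objects_edge_points):
    """Map each object's edge points to one rectangular region."""
    return [bounding_region(pts) for pts in objects_edge_points]
```

A region-per-object mapping like this produces rectangles whose display size is necessarily no larger than the first interface, consistent with the display-size condition of the method.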
It should be noted that, in the embodiments of the present application, the electronic device performs a screen capture operation on each of the N regions in the first interface separately — that is, N screen capture operations in total — to obtain the N screen capture images.
It can be understood that, in response to the first input, the electronic device can divide the first interface into N regions and perform a separate screen capture operation on each of them, obtaining the N screen capture images. The electronic device can therefore capture the N screenshots after receiving only one operation from the user, which makes it convenient for the user to capture multiple images from the interface of the electronic device and saves the user's time.
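One straightforward way to realize the N per-region captures, sketched below under stated assumptions: take a single full-interface capture, then crop each region out of it. The "image" here is a plain row-major list of pixel rows, and the region format `(left, top, right, bottom)` is assumed; a real device would use its platform's graphics or screenshot API instead.

```python
# Minimal sketch: one full-screen capture, then one crop per region.
# The list-of-rows image representation is an assumption for illustration.

def crop(image, region):
    """Cut the (left, top, right, bottom) rectangle out of a row-major image."""
    left, top, right, bottom = region
    return [row[left:right] for row in image[top:bottom]]

def capture_regions(full_capture, regions):
    """Return one cropped screen capture image per region (N images)."""
    return [crop(full_capture, r) for r in regions]
```

Each cropped image covers only part of the full capture, so its display size is smaller than that of the first interface, as the method requires.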
Optionally, with reference to fig. 4, as shown in fig. 5, the first input includes a first sub-input and a second sub-input, and the step 102a may be implemented by the following step 102a1 and step 102a 2.
Step 102a1, in a case that N identification boxes are displayed on the first interface, the electronic device adjusts, in response to the first sub-input on a target identification box among the N identification boxes, the display size of the region indicated by the target identification box.
Wherein, each identification box is used for indicating an area in the first interface.
Optionally, in this embodiment of the application, the first sub-input is used to trigger the electronic device to adjust the display size of the target identification frame, and the second sub-input is used to trigger the electronic device to respectively perform a screen capture operation on each of the N regions.
Optionally, in the embodiments of the application, the first sub-input may specifically be a drag input on the edge of the target identification box, where the drag input adjusts the display size of the target identification box. The first sub-input may also be an input in which the user manually enters the size of the target identification box — for example, in an editing interface that the electronic device displays for the target identification box, the user may type in its display size. The first sub-input may also be a first voice input instructing the electronic device to adjust the display size of the target identification box. This may be chosen according to actual use requirements and is not specifically limited in the embodiments of the present application.
It should be noted that, in the embodiments of the present application, the identification boxes are used to show the user which objects the electronic device has selected. The target identification box indicates a target object that the user has chosen from among the objects selected by the electronic device, the target object being at least one of those objects.
Optionally, in the embodiments of the application, the user may freely adjust the display size of the target identification frame through the first sub-input, and may also, while resizing it, merge the target identification frame with other identification frames — that is, make the target identification frame enclose other frames. In that case the adjusted target identification frame contains at least one other object in addition to the target object.
Step 102a2, the electronic device responds to the second sub-input and respectively executes screen capture operation on each of the N areas.
Optionally, in this embodiment of the application, when the preset control is displayed on the first interface of the electronic device, the second sub-input may specifically be a touch input to the preset control, where the preset control is used to determine that a screen capture operation is performed on each of the N regions, and the touch input may be any of the following: click input to the preset control, double click input to the preset control, drag input to the preset control according to a preset trajectory, and the like. The second sub-input may also be a preset gesture input on the first interface by the user, where the preset gesture is used to trigger the electronic device to perform a screen capture operation on each of the N regions. The second sub-input may also be a second voice input to the electronic device instructing the electronic device to perform a screen capture operation on each of the N regions. The determination may be specifically performed according to actual use requirements, and the embodiment of the present application is not specifically limited.
It should be noted that, in this embodiment of the application, after the first sub-input adjusts the display size of the target identification frame displayed on the target area, the electronic device may, in response to the second sub-input, capture the image of each of the N adjusted areas according to the final adjustment result (i.e., the adjusted display size of the target identification frame).
For example, fig. 6 is a third schematic diagram of the screen capture operation of the electronic device. As shown in fig. 6 (a), three identification frames are displayed on the interface 006 of the electronic device 00, and the areas corresponding to the three identification frames are: region 007, region 008, and region 009. The user may drag an edge of the identification frame of region 009 (i.e., the target identification frame) in the direction of F1, and the electronic device 00, in response to the drag input (i.e., the first sub-input), displays the adjusted region 010 (i.e., the target identification frame is resized to merge region 008 and region 009) and the original region 007, as shown in fig. 6 (b). At this time, the electronic device 00 can receive a voice input (i.e., the second sub-input) of the user, and a screen capture operation will be performed on region 007 and region 010, respectively.
It can be understood that, in the embodiment of the application, the user may adjust the display size of the target identification frame through the first sub-input, flexibly resizing the identification frame according to actual use requirements and thereby adjusting the content of the images in the N regions to be captured; the electronic device may then, in response to the second sub-input, perform a screen capture operation on each of the adjusted N regions. In this way, the user can adjust the identification frames as needed and capture the multiple required images without performing multiple manual frame selection operations, which improves human-computer interaction performance and the user's experience.
Optionally, with reference to fig. 4, as shown in fig. 7, the first input includes a third sub-input and a fourth sub-input, and the step 102a may be implemented by the following steps 102a3 and 102a 4.
In step 102a3, in the case that P identification frames are displayed on the first interface, the electronic device, in response to the third sub-input, updates and displays the P identification frames as N identification frames.
Each identification frame is used to indicate an area in the first interface; the first area comprises at least one second area; the first area is an area indicated by one of the N identification frames, the second area is an area indicated by one of the P identification frames, and P is a positive integer greater than or equal to N.
Optionally, in this embodiment of the application, the third sub-input is used to trigger the electronic device to update the displayed identification frames, for example, to update and display the P identification frames as N identification frames. The fourth sub-input is used to trigger the electronic device to perform a screen capture operation on each of the N areas.
Optionally, in this embodiment of the application, before the P identification frames are displayed on the first interface of the electronic device, the user may trigger the electronic device to display the P identification frames on the first interface through a first target input to the electronic device. The first target input may be a touch input to the first interface, or a press input to a physical key of the electronic device. The specific form may be determined according to actual use requirements and is not specifically limited in this embodiment of the present application.
Optionally, in this embodiment of the application, the electronic device may determine the N identification frames according to the P identification frames. Specifically, the electronic device may determine the N identification frames in any of the following modes. Mode 1: the electronic device may display P first-level identification frames each containing only one object, and then merge any two of the P first-level identification frames that satisfy a preset condition into one second-level identification frame; if Q second-level identification frames are obtained by merging (where Q is a positive integer smaller than P/2), the N identification frames include the Q second-level identification frames and the (P-2Q) first-level identification frames that were not merged. A second-level identification frame contains the objects of the two original first-level identification frames, and the preset condition is that the distance between the centers of gravity of the two first-level identification frames is smaller than a preset value. Mode 2: similarly to mode 1, the electronic device may further merge to obtain third-level identification frames, fourth-level identification frames, and so on; in this case, the N identification frames include the second-level, third-level, and fourth-level identification frames, the first-level identification frames that did not participate in merging, and the like. Mode 3: the electronic device may determine the N identification frames according to user input; for example, the user may select L identification frames from the P first-level identification frames (where L is a positive integer smaller than P) for merging. The specific mode may be determined according to actual use requirements and is not specifically limited in this embodiment of the present application.
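Mode 1 above can be sketched as a greedy single pass over the first-level identification frames, merging any pair whose centers of gravity are closer than the preset value. The rectangle representation `(left, top, right, bottom)` and all function names are assumptions for illustration, not part of the claimed method:

```python
import math

def centroid(frame):
    # Center of gravity of a rectangular frame (left, top, right, bottom).
    l, t, r, b = frame
    return ((l + r) / 2, (t + b) / 2)

def merge_pair(a, b):
    # Union bounding rectangle of two frames (the second-level frame).
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def merge_close_frames(frames, threshold):
    """Greedy single pass: merge any two first-level frames whose
    centers of gravity are closer than `threshold` (the preset value)
    into one second-level frame; unmerged frames are kept as-is."""
    result = []
    used = [False] * len(frames)
    for i, a in enumerate(frames):
        if used[i]:
            continue
        for j in range(i + 1, len(frames)):
            if used[j]:
                continue
            if math.dist(centroid(a), centroid(frames[j])) < threshold:
                result.append(merge_pair(a, frames[j]))
                used[i] = used[j] = True
                break
        if not used[i]:
            result.append(a)
            used[i] = True
    return result
```

With threshold 20, two adjacent frames (0, 0, 10, 10) and (12, 0, 22, 10) merge into (0, 0, 22, 10), while a distant frame at (100, 100, 110, 110) is kept unmerged.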
It should be noted that, in the embodiment of the present application, the level of an identification frame is determined by the number of objects it contains; for example, a first-level identification frame contains only one object, and a second-level identification frame contains two objects.
Optionally, in this embodiment of the application, two first-level identification frames may be merged into one second-level identification frame in the following specific manner (taking rectangular identification frames as an example): the coordinates of the four boundary points of each first-level identification frame are obtained, the coordinates of the boundary points of the two frames are compared, and the limit boundary point in each of the four directions is determined (the limit boundary point in a certain direction is the farthest point in that direction among all the obtained coordinate points; for example, the limit boundary point in the positive X-axis direction is the point lying farthest along the positive X-axis); a rectangular identification frame, i.e., the second-level identification frame, is then determined from the limit boundary points in the four directions. Third-level and higher-level identification frames are determined in the same manner as second-level identification frames, and details are not repeated here.
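The limit-boundary-point construction described above amounts to taking, over the corner coordinates of the frames being merged, the extreme coordinate in each of the four directions. A minimal sketch, under the assumption that identification frames are axis-aligned rectangles given as `(left, top, right, bottom)` (the function name is hypothetical):

```python
def second_level_frame(frames):
    """Merge rectangular identification frames into one rectangle
    determined by the limit boundary point in each of the four
    directions (leftmost, topmost, rightmost, bottommost)."""
    # Collect the boundary-point coordinates of every frame.
    corners = [(x, y) for (l, t, r, b) in frames for (x, y) in ((l, t), (r, b))]
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return (min(xs), min(ys), max(xs), max(ys))
```

The same routine applies unchanged when merging more than two frames, which is why third-level and higher-level frames are determined by the same steps.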
In step 102a4, the electronic device, in response to the fourth sub-input, performs a screen capture operation on each of the N areas.
Optionally, in this embodiment of the application, the electronic device may merge some of the previously displayed P identification frames and display the merged N identification frames.
It should be noted that, in the embodiment of the present application, for the description of the fourth sub-input, reference may be made to the description of the second sub-input in step 102a2, and details are not repeated here.
Optionally, in this embodiment of the present application, the at least one first edge position coincides with the at least one second edge position.
The at least one first edge position is an edge position of the first area, and the at least one second edge position is an edge position of the at least one second area.
Optionally, in this embodiment of the application, the edge position of the first area or the edge position of the second area may be represented by a coordinate point of the position, for example, the limit boundary point in step 102a3.
It should be noted that, in the embodiment of the present application, when some of the P identification frames are merged, at least one first edge position may coincide with at least one second edge position.
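This coincidence property can be checked directly: every edge coordinate of the merged first area must come from some constituent second area. A sketch under the assumption that areas are axis-aligned rectangles `(left, top, right, bottom)`; the function name is hypothetical:

```python
def edges_coincide(first, seconds):
    """True if each boundary coordinate of the first area equals the
    corresponding boundary coordinate of at least one second area,
    i.e., at least one first edge position coincides with a second
    edge position in every direction."""
    lefts = {s[0] for s in seconds}
    tops = {s[1] for s in seconds}
    rights = {s[2] for s in seconds}
    bottoms = {s[3] for s in seconds}
    return (first[0] in lefts and first[1] in tops
            and first[2] in rights and first[3] in bottoms)
```

A first area obtained by merging second areas along their limit boundary points always satisfies this check, whereas an arbitrary enclosing rectangle generally does not.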
In addition, in this embodiment of the application, the electronic device may update and display the P identification frames as the N identification frames according to the edge position of the second area indicated by each of the P identification frames. For the specific determination manner, reference may be made to the related description in step 102a3, and details are not repeated here.
Illustratively, as shown in fig. 6 (a), three identification frames are displayed on the interface 006 of the electronic device 00, and the areas corresponding to the three identification frames are: region 007, region 008, and region 009 (i.e., three second areas). The user can press a volume key of the electronic device 00, and the electronic device 00, in response to the press input (i.e., the third sub-input), displays on the interface 006, as shown in fig. 6 (b), region 010 (i.e., a first area comprising the original region 008 and region 009) and region 007 (i.e., a second area). At this time, the electronic device 00 can receive a voice input (i.e., the fourth sub-input) of the user, and a screen capture operation will be performed on region 007 and region 010, respectively.
It can be understood that, in the embodiment of the present application, the electronic device may merge some of the previously displayed P identification frames and display the merged N identification frames and the areas they indicate. In this way, the electronic device can merge some of the originally displayed identification frames according to the user's input, so that the user does not need to select areas for merging, which facilitates use.
It should be noted that, in the screen capture method provided in the embodiment of the present application, the execution subject may be a screen capture apparatus, or a control module in the screen capture apparatus for executing the screen capture method. In the embodiment of the present application, the screen capture method provided in the embodiment of the present application is described by taking the screen capture apparatus executing the screen capture method as an example.
As shown in fig. 8, an embodiment of the present application provides a screen capture device 800. The screen capture apparatus 800 may include a receiving module 801 and a processing module 802. The receiving module 801 may be configured to receive a first input when M objects are included in a first interface displayed by the electronic device. The processing module 802 may be configured to perform, in response to the first input received by the receiving module 801, N screen capturing operations on a target object of the M objects, resulting in N screen capturing images. Wherein each of the N screen shots contains at least one of the M objects, and a display size of each screen shot is smaller than a display size of the first interface; the target object includes some or all of the M objects, M and N both being positive integers greater than 1.
Optionally, in this embodiment of the application, the processing module 802 may be specifically configured to determine, in response to the first input received by the receiving module 801, the first interface as N regions, and perform a screen capture operation on each of the N regions in the first interface. Wherein each region contains at least one object of the M objects.
Optionally, in this embodiment of the application, the first input includes a first sub-input and a second sub-input. The processing module 802 is further specifically configured to, in a case that N identification boxes are displayed on the first interface, in response to the first sub-input to a target identification box in the N identification boxes received by the receiving module 801, adjust a display size of an area indicated by the target identification box, where each identification box is used to indicate an area in the first interface. The processing module 802 is further specifically configured to perform a screen capture operation on each of the N regions in response to the second sub-input received by the receiving module 801.
Optionally, in this embodiment of the application, the first input includes a third sub-input and a fourth sub-input. The processing module 802 is further specifically configured to, in a case that P identification frames are displayed on the first interface, in response to the third sub-input received by the receiving module 801, update and display the P identification frames as N identification frames. The processing module 802 is further specifically configured to perform a screen capture operation on each of the N regions in response to the fourth sub-input received by the receiving module 801. Each identification frame is used to indicate an area in the first interface; the first area comprises at least one second area; the first area is an area indicated by one of the N identification frames, the second area is an area indicated by one of the P identification frames, and P is a positive integer greater than or equal to N.
Optionally, in this embodiment of the present application, the at least one first edge position coincides with the at least one second edge position. The at least one first edge position is an edge position of the first area, and the at least one second edge position is an edge position of the at least one second area.
Optionally, in this embodiment of the application, the M objects are determined according to content types of the content in the first interface. Wherein the content type includes at least one of: text, pictures, video.
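A minimal sketch of determining the M objects from the content types of the content in the first interface; the content-item representation and function name are assumptions for illustration, not part of the embodiment:

```python
def identify_objects(content_items: list) -> list:
    """Each content item in the first interface whose type is text,
    picture, or video becomes one of the M objects eligible for
    identification frames and screen capture."""
    recognized = {"text", "picture", "video"}
    return [item for item in content_items if item["type"] in recognized]
```

For example, an interface holding a text block, a picture, and an unrecognized element would yield M = 2 objects.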
The screen capture device in the embodiment of the present application may be a device, and may also be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, and the like, and the embodiment of the present application is not particularly limited.
The screen capture device in the embodiment of the application can be a device with an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The screen capture device provided in the embodiment of the present application can implement each process implemented by the screen capture device in the method embodiments of fig. 1 to 7, and is not described here again to avoid repetition.
The embodiment of the application provides a screen capture device. In the case that M objects are included in a first interface displayed by the electronic device, a first input may be received, and in response to the first input, N screen capture operations may be performed on a target object of the M objects to obtain N screen capture images. Each screen capture image contains at least one of the M objects, the display size of each screen capture image is smaller than that of the first interface, the target object includes some or all of the M objects, and M and N are positive integers greater than 1. It can be understood that the N screen capture images respectively correspond to multiple pieces of partial content in the first interface. That is, the user does not need to repeatedly select content in the first interface through a content selection frame; instead, through one convenient first input, the user can trigger the electronic device to perform screen capture operations on different contents in the first interface and obtain screen capture images of the respective pieces of content. This simplifies user operations when capturing multiple pieces of content from one interface and reduces the time consumed.
Optionally, an electronic device 1000 is further provided in this embodiment of the present application, and includes a processor 1010, a memory 1009, and a program or an instruction stored in the memory 1009 and capable of running on the processor 1010, where the program or the instruction is executed by the processor 1010 to implement each process of the foregoing screenshot method embodiment, and can achieve the same technical effect, and details are not described here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 9 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Among them, the input unit 1004 may include a graphic processor 10041 and a microphone 10042, the display unit 1006 may include a display panel 10061, the user input unit 1007 may include a touch panel 10071 and other input devices 10072, and the memory 1009 may be used to store software programs (e.g., an operating system, an application program required for at least one function), and various data.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 9 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, and details are not repeated here.
The user input unit 1007 may be configured to receive a first input when M objects are included in a first interface displayed by the electronic device. The processor 1010 may be configured to perform, in response to the first input received by the user input unit 1007, N screen capture operations on a target object of the M objects, resulting in N screen capture images. Each screen capture image contains at least one of the M objects, and the display size of each screen capture image is smaller than that of the first interface; the target object includes some or all of the M objects, and M and N are both positive integers greater than 1.
The application provides an electronic device, which can receive a first input in the case that a first interface displayed by the electronic device includes M objects, and in response to the first input, perform N screen capture operations on a target object of the M objects to obtain N screen capture images. Each screen capture image contains at least one of the M objects, the display size of each screen capture image is smaller than that of the first interface, the target object includes some or all of the M objects, and M and N are positive integers greater than 1. It can be understood that the N screen capture images respectively correspond to multiple pieces of partial content in the first interface. That is, the user does not need to repeatedly select content in the first interface through a content selection frame; instead, through one convenient first input, the user can trigger the electronic device to perform screen capture operations on different contents in the first interface and obtain screen capture images of the respective pieces of content. This simplifies user operations when capturing multiple pieces of content from one interface and reduces the time consumed.
Optionally, in this embodiment of the application, the processor 1010 may be specifically configured to perform, in response to the first input received by the user input unit 1007, a screen capture operation on each of the N regions in the first interface respectively. Wherein each region contains at least one object of the M objects.
It can be understood that, in the embodiment of the application, in response to the first input, the electronic device may determine the first interface as N regions, and perform, for each of the N regions in the first interface, a screen capture operation separately, and then perform N screen capture operations to obtain N screen capture images. Therefore, the electronic equipment can intercept the N screen-shot images only by receiving one-time operation of the user, so that the user can conveniently intercept a plurality of images from the interface of the electronic equipment, and the time of the user is saved.
Optionally, in this embodiment of the application, the first input includes a first sub-input and a second sub-input. The processor 1010 is further specifically configured to, in a case that N identification boxes are displayed on the first interface, adjust a display size of an area indicated by a target identification box in the N identification boxes in response to the first sub-input to the target identification box received by the user input unit 1007, where each identification box is used to indicate an area in the first interface. The processor 1010 is further specifically configured to perform a screen capture operation on each of the N regions in response to the second sub-input received by the user input unit 1007.
It can be understood that, in the embodiment of the application, the user may adjust the display size of the target identification frame through the first sub-input, flexibly resizing the identification frame according to actual use requirements and thereby adjusting the content of the images in the N regions to be captured; the electronic device may then, in response to the second sub-input, perform a screen capture operation on each of the adjusted N regions. In this way, the user can adjust the identification frames as needed and capture the multiple required images without performing multiple manual frame selection operations, which improves human-computer interaction performance and the user's experience.
Optionally, in this embodiment of the application, the first input includes a third sub-input and a fourth sub-input. The processor 1010 is further specifically configured to, in a case that P identification frames are displayed on the first interface, in response to the third sub-input received by the user input unit 1007, update and display the P identification frames as N identification frames. The processor 1010 is further specifically configured to perform a screen capture operation on each of the N regions in response to the fourth sub-input received by the user input unit 1007. Each identification frame is used to indicate an area in the first interface; the first area comprises at least one second area; the first area is an area indicated by one of the N identification frames, the second area is an area indicated by one of the P identification frames, and P is a positive integer greater than or equal to N.
It can be understood that, in the embodiment of the present application, the electronic device may merge some of the previously displayed P identification frames and display the merged N identification frames and the areas they indicate. In this way, the electronic device can merge some of the originally displayed identification frames according to the user's input, so that the user does not need to select areas for merging, which facilitates use.
Optionally, in this embodiment of the present application, the at least one first edge position coincides with the at least one second edge position. The at least one first edge position is an edge position of the first area, and the at least one second edge position is an edge position of the at least one second area.
Optionally, in this embodiment of the application, the M objects are determined according to content types of the content in the first interface. Wherein the content type includes at least one of: text, pictures, video.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing screenshot method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the screen capture method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (12)
1. A method of screen capture, the method comprising:
receiving a first input under the condition that M objects are included in a first interface displayed by the electronic equipment;
responding to the first input, and executing N screen capturing operations on a target object in the M objects to obtain N screen capturing images;
wherein each of the N screen capture images contains at least one of the M objects, and a display size of each screen capture image is smaller than a display size of the first interface; the target object comprises part or all of the M objects, and M and N are positive integers greater than 1.
2. The method of claim 1, wherein performing N screen capturing operations on a target object of the M objects in response to the first input comprises:
in response to the first input, determining the first interface as N areas, and respectively performing screen capture operation on each of the N areas;
wherein each region contains at least one object of the M objects.
3. The method of claim 2, wherein the first input comprises a first sub-input and a second sub-input;
the responding to the first input, determining the first interface as N areas, and respectively executing screen capture operation on each of the N areas, wherein the screen capture operation comprises:
under the condition that N identification frames are displayed on the first interface, responding to the first sub-input of a target identification frame in the N identification frames, and adjusting the display size of an area indicated by the target identification frame, wherein each identification frame is used for indicating one area in the first interface;
and responding to the second sub-input, and respectively executing screen capture operation on each of the N areas.
4. The method of claim 2, wherein the first input comprises a third sub-input and a fourth sub-input;
the responding to the first input, determining the first interface as N areas, and respectively executing screen capture operation on each of the N areas, wherein the screen capture operation comprises:
under the condition that P identification frames are displayed on the first interface, responding to the third sub-input, and updating and displaying the P identification frames into N identification frames;
in response to the fourth sub-input, performing a screen capture operation on each of the N regions respectively;
wherein each identification frame is used for indicating one area in the first interface; the first area comprises at least one second area; the first area is an area indicated by one of the N identification frames, the second area is an area indicated by one of the P identification frames, and P is a positive integer greater than or equal to N.
5. The method of claim 4, wherein at least one first edge position coincides with at least one second edge position;
wherein the at least one first edge position is an edge position of the first region, and the at least one second edge position is an edge position of the at least one second region.
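Claims 4 and 5 together describe collapsing P identification boxes into N, where each resulting first region contains at least one original second region and shares at least one edge position with it. One way to satisfy both properties (an illustrative assumption, not the patented algorithm) is to replace each user-selected group of boxes with its bounding box, since a bounding box always contains its members and each of its four edges coincides with some member's edge:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """An identification box indicating one region of the interface."""
    left: int
    top: int
    right: int
    bottom: int

def merge_boxes(boxes, groups):
    """Merge P identification boxes into N by replacing each group with its
    bounding box, so every merged (first) region contains the original
    (second) regions in its group and reuses their edge positions."""
    merged = []
    for group in groups:
        members = [boxes[i] for i in group]
        merged.append(Box(
            left=min(b.left for b in members),
            top=min(b.top for b in members),
            right=max(b.right for b in members),
            bottom=max(b.bottom for b in members),
        ))
    return merged

# P = 3 boxes displayed; the third sub-input groups the first two (N = 2).
p_boxes = [Box(0, 0, 2, 2), Box(2, 0, 4, 2), Box(0, 2, 4, 4)]
n_boxes = merge_boxes(p_boxes, [[0, 1], [2]])
print(n_boxes[0])  # Box(left=0, top=0, right=4, bottom=2)
```

Here the merged box's left edge coincides with the first member's and its right edge with the second member's, matching claim 5's edge-coincidence condition.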
6. A screen capture device, the device comprising: a receiving module and a processing module;
the receiving module is configured to receive a first input in a case that a first interface displayed by an electronic device comprises M objects;
the processing module is configured to, in response to the first input received by the receiving module, perform N screen capture operations on target objects among the M objects to obtain N screenshots;
wherein each of the N screenshots contains at least one of the M objects, and a display size of each screenshot is smaller than a display size of the first interface; the target objects comprise some or all of the M objects, and M and N are positive integers greater than 1.
7. The device of claim 6, wherein the processing module is specifically configured to, in response to the first input received by the receiving module, divide the first interface into N regions and perform a screen capture operation on each of the N regions in the first interface;
wherein each region contains at least one of the M objects.
8. The device of claim 7, wherein the first input comprises a first sub-input and a second sub-input;
the processing module is further configured to, in a case that N identification boxes are displayed on the first interface, adjust, in response to the first sub-input on a target identification box among the N identification boxes received by the receiving module, a display size of the region indicated by the target identification box, wherein each identification box indicates one region in the first interface;
the processing module is further configured to perform a screen capture operation on each of the N regions in response to the second sub-input received by the receiving module.
9. The device of claim 7, wherein the first input comprises a third sub-input and a fourth sub-input;
the processing module is further configured to, in a case that P identification boxes are displayed on the first interface, update the displayed P identification boxes to N identification boxes in response to the third sub-input received by the receiving module;
the processing module is further configured to perform a screen capture operation on each of the N regions in response to the fourth sub-input received by the receiving module;
wherein each identification box indicates one region in the first interface; a first region contains at least one second region; the first region is a region indicated by one of the N identification boxes, the second region is a region indicated by one of the P identification boxes, and P is a positive integer greater than or equal to N.
10. The device of claim 9, wherein at least one first edge position coincides with at least one second edge position;
wherein the at least one first edge position is an edge position of the first region, and the at least one second edge position is an edge position of the at least one second region.
11. An electronic device, comprising a processor, a memory, and a program or instructions stored in the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the screen capture method according to any one of claims 1 to 5.
12. A readable storage medium, wherein the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the screen capture method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010444244.4A CN111638844A (en) | 2020-05-22 | 2020-05-22 | Screen capturing method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010444244.4A CN111638844A (en) | 2020-05-22 | 2020-05-22 | Screen capturing method and device and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111638844A true CN111638844A (en) | 2020-09-08 |
Family
ID=72329389
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010444244.4A Pending CN111638844A (en) | 2020-05-22 | 2020-05-22 | Screen capturing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111638844A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106502533A (en) * | 2016-10-21 | 2017-03-15 | 上海与德信息技术有限公司 | Screenshot method and device
CN107678644A (en) * | 2017-09-18 | 2018-02-09 | 维沃移动通信有限公司 | Image processing method and mobile terminal
US10147197B2 (en) * | 2016-05-14 | 2018-12-04 | Google Llc | Segment content displayed on a computing device into regions based on pixels of a screenshot image that captures the content
CN109460177A (en) * | 2018-09-27 | 2019-03-12 | 维沃移动通信有限公司 | Image processing method and terminal device
CN110502293A (en) * | 2019-07-10 | 2019-11-26 | 维沃移动通信有限公司 | Screenshot method and terminal device
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112181252A (en) * | 2020-09-29 | 2021-01-05 | 维沃移动通信(杭州)有限公司 | Screen capturing method and device and electronic equipment |
WO2022068721A1 (en) * | 2020-09-29 | 2022-04-07 | 维沃移动通信有限公司 | Screen capture method and apparatus, and electronic device |
US12135864B2 (en) | 2020-09-29 | 2024-11-05 | Vivo Mobile Communication Co., Ltd. | Screen capture method and apparatus, and electronic device |
CN112269476A (en) * | 2020-10-28 | 2021-01-26 | 维沃移动通信有限公司 | Formula display method and device and electronic equipment |
CN112269476B (en) * | 2020-10-28 | 2024-05-31 | 维沃移动通信有限公司 | Formula display method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP4207737A1 (en) | Video shooting method, video shooting apparatus, and electronic device | |
CN112738402B (en) | Shooting method, shooting device, electronic equipment and medium | |
CN111612873A (en) | GIF picture generation method and device and electronic equipment | |
CN113126862B (en) | Screen capture method and device, electronic equipment and readable storage medium | |
CN104754223A (en) | Method for generating thumbnail and shooting terminal | |
CN111190677A (en) | Information display method, information display device and terminal equipment | |
CN112269522A (en) | Image processing method, image processing device, electronic equipment and readable storage medium | |
CN111638849A (en) | Screenshot method and device and electronic equipment | |
CN112269519B (en) | Document processing method and device and electronic equipment | |
CN115357158A (en) | Message processing method and device, electronic equipment and storage medium | |
CN112099714B (en) | Screenshot method and device, electronic equipment and readable storage medium | |
CN111638844A (en) | Screen capturing method and device and electronic equipment | |
CN112911147A (en) | Display control method, display control device and electronic equipment | |
CN112684963A (en) | Screenshot method and device and electronic equipment | |
CN115454365A (en) | Picture processing method and device, electronic equipment and medium | |
CN111857465B (en) | Application icon sorting method and device and electronic equipment | |
CN112783406A (en) | Operation execution method and device and electronic equipment | |
CN112383708A (en) | Shooting method, shooting device, electronic equipment and readable storage medium | |
CN111638839A (en) | Screen capturing method and device and electronic equipment | |
CN112286430B (en) | Image processing method, apparatus, device and medium | |
CN112702524B (en) | Image generation method and device and electronic equipment | |
CN111966259B (en) | Screenshot method and device and electronic equipment | |
CN114518859A (en) | Display control method, display control device, electronic equipment and storage medium | |
CN114564921A (en) | Document editing method and device | |
CN113885981A (en) | Desktop editing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200908 |