CN113568551A - Picture saving method and device

Info

Publication number: CN113568551A
Application number: CN202110843906.XA
Authority: CN (China)
Prior art keywords: video, target, screenshot, picture, target video
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 蒋羽
Current Assignee: Beijing Dajia Internet Information Technology Co Ltd
Original Assignee: Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd

Classifications

    • G06F 3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range, for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 9/451: Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a picture saving method and device. The method is applied to a client and includes: playing a target video; generating a target picture based on at least one frame of image of the target video; and adding the target picture to an expression library that provides alternative expressions for social speech. With this scheme, when posting in social speech the user can quickly find the picture captured from the video among the alternative expressions in the expression library, which significantly improves the efficiency of sharing video screenshots in social speech scenarios and improves the user experience.

Description

Picture saving method and device
Technical Field
The present disclosure relates to the field of computer applications, and in particular, to a method and an apparatus for storing pictures.
Background
In the related art, video playing software may come with built-in screenshot and sharing functions: after the user finishes taking a screenshot, the software displays a sharing control through which the screenshot can be shared to specific social software with a single tap, which significantly improves the efficiency of sharing screenshots from within the video playing software.
However, the above scheme does not improve sharing efficiency in other scenarios. For example, if a user wants to share a screenshot previously generated by the video playing software in a social comment scenario, the user has to open the software's screenshot directory in the system album, locate the previously generated screenshot file, and insert it into the comment. This process is inefficient and results in a poor user experience.
Disclosure of Invention
In view of this, the present disclosure provides a picture saving method and device, so as to at least solve the technical problem of low video screenshot sharing efficiency in a social speaking scenario in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a method for saving a picture is provided, which is applied to a client, and includes:
playing the target video;
generating a target picture based on at least one frame of image of the target video;
and adding the target picture to an expression library for providing alternative expressions for the social speech.
Optionally, the target picture is a target dynamic picture including multiple frames of images.
Optionally, the generating a target picture based on at least one frame of image of the target video includes:
receiving a screenshot instruction; the screenshot instruction carries progress information indicating the progress of the clip to be stored in the target video;
and generating a target dynamic picture based on at least two frames of images in the segment to be stored corresponding to the progress information.
Optionally, the receiving a screenshot instruction includes:
responding to a triggering operation of a preset screenshot function control, and displaying a starting point mark and an end point mark in a corresponding area of a progress indication control of the target video; the positions of the starting point mark and the end point mark indicate the starting point and the end point of the segment to be stored in the target video; adjusting the position of the start point marker and/or the end point marker in response to a position adjustment operation on the start point marker and/or the end point marker;
and generating a screenshot instruction, wherein the screenshot instruction carries the video progress corresponding to the latest positions of the starting point mark and the end point mark.
Optionally, the receiving a screenshot instruction includes:
in the playing process of the target video, receiving a screenshot starting instruction and a screenshot stopping instruction in sequence; the screenshot starting instruction and the screenshot stopping instruction are respectively used for indicating a starting point and an end point of the to-be-stored clip in the target video;
and generating a screenshot instruction, wherein the screenshot instruction carries the video progress corresponding to the screenshot starting instruction and the video progress corresponding to the screenshot stopping instruction.
Optionally, the generating a target picture based on at least one frame of image of the target video includes:
in the target video, determining at least one segment to be stored containing preset high-value content based on a content identification algorithm;
generating alternative dynamic pictures corresponding to the fragments to be stored respectively based on at least two frames of images in the fragments to be stored;
and in response to the selection operation, determining at least one alternative dynamic picture as the target dynamic picture.
Optionally, the expression library includes any one or more of the following combinations:
a local expression library bound to the user account; a local expression library not bound to the user account;
an online expression library bound to the user account; an online expression library not bound to the user account.
Optionally, the target picture carries source information indicating the target video.
Optionally, before the generating the target picture, the method further includes:
determining whether the target video is a video for which screenshots are prohibited;
if so, hiding the function entry for generating the target picture and/or displaying prompt information indicating that screenshots of the target video are prohibited.
Optionally, after the generating the target picture, the method further includes:
displaying a sharing control used for sharing the target picture;
responding to the triggering operation of the sharing control, and sharing the target picture to the sharing target indicated by the triggering operation.
Optionally, the method further includes:
in a case where the number of pictures in the expression library is less than a preset quantity threshold, adding to the expression library at least one picture which is generated based on any target video and whose heat index is greater than a preset heat threshold;
wherein the heat index is positively correlated with the number of times the corresponding picture is saved and/or used by the user.
According to a second aspect of the embodiments of the present disclosure, a picture saving apparatus is provided, which is applied to a client, and includes:
a playing module configured to play the target video;
a generating module configured to generate a target picture based on at least one frame of image of the target video;
an adding module configured to add the target picture to an expression library for providing alternative expressions for social speech.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the picture saving method according to any of the embodiments of the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, where instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the picture saving method according to any one of the embodiments of the first aspect.
According to a fifth aspect of the embodiments of the present disclosure, a computer program product is provided, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the picture saving method according to any of the embodiments.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
in the technical scheme, the target picture generated based on the target video is saved into the expression library, so that the user can quickly find the target picture among the alternative expressions in the expression library when posting in social speech; this significantly improves the efficiency of sharing video screenshots in social speech scenarios and improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating a picture saving method in accordance with an exemplary embodiment;
FIG. 2 is a diagram illustrating an example of an interface for a social utterance, in accordance with an illustrative embodiment;
FIG. 3 is an exemplary diagram illustrating an interface for determining the progress of a screenshot in accordance with an illustrative embodiment;
FIG. 4 is a diagram illustrating another example of an interface for determining the progress of a screenshot, in accordance with an illustrative embodiment;
FIG. 5 is an exemplary diagram illustrating an interface for presenting an alternative dynamic picture in accordance with one illustrative embodiment;
FIG. 6 is an exemplary diagram of an interface for presenting recommended pictures in an emoticon library, according to an exemplary embodiment;
FIG. 7 is a schematic block diagram of a picture saving device shown in an exemplary embodiment;
FIG. 8 is a block diagram of an electronic device in accordance with an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions in one or more embodiments of the present disclosure, the technical solutions in one or more embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in one or more embodiments of the present disclosure. It is to be understood that the described embodiments are only a few, and not all embodiments. All other embodiments that can be derived by one of ordinary skill in the art from the disclosure without making any creative effort shall fall within the scope of protection of the disclosure.
When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of systems and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
In the related art, video playing software may come with built-in screenshot and sharing functions: after the user finishes taking a screenshot, the software displays a sharing control through which the screenshot can be shared to specific social software with a single tap, which significantly improves the efficiency of sharing screenshots from within the video playing software.
For example, a screenshot button may be built into the video playing software. If an interaction with the screenshot button is received during video playback, the currently played frame is captured and a one-tap sharing button is additionally displayed; when an interaction with the one-tap sharing button is received, a sharing panel pops up. In this way the user can quickly take a screenshot during playback and share it to social software such as instant messaging software, e-mail, or a microblog.
However, the above scheme does not improve sharing efficiency in other scenarios. For example, if a user wants to share a screenshot previously generated by the video playing software in a social comment scenario, the user has to open the software's screenshot directory in the system album, locate the previously generated screenshot file, and insert it into the comment. This process is inefficient and results in a poor user experience.
Based on the above, the present disclosure provides a technical solution for capturing a target picture from a target video in a playing process of the target video and adding the captured target picture to an expression library.
When the method is implemented, the expression library may be an expression library for providing alternative expressions for the social speech; specifically, the expression library may be an expression library of social software other than the video playing software, or may be an expression library of the video playing software itself. For example, part of the video playing software can also provide social speech functions such as video comments and instant barrage, so that the target picture captured in the video playing software can be directly added into an expression library of the video playing software for the user to quickly select the target picture under the condition of the social speech such as video comments and barrage.
In this technical scheme, the target picture generated based on the target video is saved into the expression library, which connects the screenshot scenario to the social speech scenario: when posting in social speech the user can quickly find the target picture among the alternative expressions in the expression library, which significantly improves the efficiency of sharing video screenshots in social speech scenarios and improves the user experience.
The following describes the technical solution by using a specific embodiment and combining a specific application scenario.
Referring to fig. 1, fig. 1 is a flowchart illustrating a picture saving method according to an exemplary embodiment, which may be applied to a client, and includes the following steps:
s201, playing a target video;
s202, generating a target picture based on at least one frame of image of the target video;
s203, adding the target picture to an expression library for providing alternative expressions for the social speech.
The client may include any form of electronic device, such as a personal computer, a smart phone, a tablet computer, an interactive television, a smart watch, and the like, and those skilled in the art may apply the technical solution of the present disclosure to a specific electronic device according to a specific service requirement.
In this example, the client may play the target video. Specifically, the client may play the target video full screen, in a split-screen manner using part of the screen, in a floating window, and so on. Taking a smartphone client as an example, playing a movie or a short-video stream in full screen, playing a video in half screen while displaying video comments, and playing a product video in a floating window while an e-commerce detail page is displayed can all be regarded as playing a target video. The specific playing mode of the target video therefore does not need to be further enumerated or limited in the present disclosure.
The target video can comprise any kind of video works; from the angle of an acquisition mode, the target video can comprise a live video and an on-demand video; from the content perspective, the target video may include videos produced by professionals, such as movies, dramas, art programs, and the like, and also may include videos produced by users, such as life records, talent shows, skill sharing, and the like; those skilled in the art can apply the above technical solution of the present disclosure to any kind of target video according to specific service requirements.
In this example, the client may generate a target picture based on at least one frame of image of the target video. In general, a video file contains multiple frames of images, and the client generates the target picture from at least one of them; this is the screenshot process. When generating the target picture, the at least one frame of image may be used directly as the content of the target picture, or the target picture may be obtained by processing the at least one frame of image. For example, suppose the target video has a specification of 1920 × 1080@24fps (1920 pixels horizontally, 1080 pixels vertically, 24 frames per second) and 12 consecutive frames starting from the X-th second are used to generate the target picture. To reduce the size of the generated picture, those 12 frames may be compressed, for example by reducing the image dimensions, compressing the dynamic range, or applying a picture compression algorithm, so as to obtain a small target picture that is convenient to transmit over the network. It can be understood that reducing the image size or color depth, dropping frames, and similar operations are all available image-processing means; the present disclosure does not need to limit the specific image-processing method, and a person skilled in the art can choose one according to specific service requirements and the processing capability of the client.
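As a concrete illustration of the frame-processing step just described, the sketch below reads a clip from a video, drops and downscales frames to shrink the result, and encodes the remaining frames as an animated GIF. The file names, subsampling factors, and the use of the imageio library (with its ffmpeg plugin for common video formats) are assumptions made only for illustration, not details fixed by the disclosure.

```python
# Minimal sketch: turn a short clip of the target video into a compact
# animated picture. Paths, frame rate and subsampling factors are assumptions.
import imageio

def clip_to_animated_picture(video_path, start_s, end_s,
                             out_path="target_picture.gif",
                             fps=24, keep_every=2, downscale=2):
    """Encode the frames in [start_s, end_s] of the video as a smaller GIF."""
    reader = imageio.get_reader(video_path)      # needs the imageio-ffmpeg plugin
    first, last = int(start_s * fps), int(end_s * fps)
    frames = []
    for index, frame in enumerate(reader):
        if index < first:
            continue
        if index > last:
            break
        if (index - first) % keep_every:
            continue                              # drop frames to shrink the file
        frames.append(frame[::downscale, ::downscale])  # crude spatial downscale
    reader.close()
    imageio.mimsave(out_path, frames)             # write the kept frames as a GIF
    return out_path
```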
In an embodiment, the target picture may be a target dynamic picture containing multiple frames of images. Compared with a static picture, the multiple frames in a dynamic picture carry more information and can therefore better present content that changes over time, such as motions and facial expressions; in addition, a dynamic picture is more engaging and eye-catching than a static picture and better meets the user's social needs in social speech.
In this example, after the target picture is generated the client may add it to an expression library that provides alternative expressions for social speech. Social speech can include any behavior of posting information to the social space of the internet: public information such as video comments, microblog posts and friend-circle posts; semi-public information sent to a specified target, such as in-app private messages, e-mails and text messages; blog entries visible only to the user; and so on.
In an embodiment, the social speech may include a social system implemented by video software playing the target video; for example, the target video may be a short video, a short video application playing the short video may have a social system such as video comments and personal dynamics, and a behavior of a user posting the video comments and the personal dynamics in the short video application may be regarded as the social speaking behavior.
It can be understood that when the client adds the target picture to the expression library, it may display a confirmation control and add the picture only after the user confirms, or it may add the picture silently in the background without displaying any confirmation control. The action of adding to the expression library may also be referred to as saving, collecting, storing, and so on; the present disclosure does not need to be limited to a specific name for that action.
Referring to fig. 2, fig. 2 is an exemplary diagram illustrating an interface for social speech in an exemplary embodiment; in this example, the social speech specifically refers to a comment posting scenario in the video application, and after the video application receives an instruction of "insert expression" input by the user through the operation control, the video application may present an expression library containing the aforementioned target picture, so that the user may select an expression to be used from the expression library. That is to say, after the scheme of adding the target picture to the expression library is applied, when the user talks in a social mode, the target picture generated according to at least one frame of image in the target video can be directly inserted by directly using the expression adding mode without starting a file manager to search from a screenshot directory, so that convenience of user operation can be remarkably improved.
It can be understood that when a user wants to insert a target picture into social speech content as an expression, intelligent recommendation can be used in addition to manual selection from the expression library. For example, suppose the target picture is a laughing dynamic picture of star A. After the picture is recognized by a content identification algorithm or labeled manually, a "laugh" expression tag can be attached to it; when the software detects that the user has typed "laugh", "haha", "hey" or other phrases related to that tag in the social speech, it can automatically recommend the laughing dynamic picture of star A, and the user only needs to confirm in order to insert it, which further improves input efficiency.
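A minimal sketch of the tag-matching recommendation described above follows; the tag names, trigger words, and data layout are illustrative assumptions rather than details given by the disclosure.

```python
# Minimal sketch: recommend expressions whose tags match words in the draft text.
EXPRESSION_LIBRARY = [
    {"file": "star_a_laugh.gif", "tags": {"laugh"}},
    {"file": "cat_nod.gif", "tags": {"agree"}},
]

# Words typed by the user and the expression tag they map to (assumed mapping).
TRIGGER_WORDS = {"laugh": "laugh", "haha": "laugh", "hey": "laugh", "ok": "agree"}

def recommend_expressions(draft_text):
    hit_tags = {TRIGGER_WORDS[w] for w in draft_text.lower().split()
                if w in TRIGGER_WORDS}
    return [e["file"] for e in EXPRESSION_LIBRARY if e["tags"] & hit_tags]

print(recommend_expressions("haha that scene"))   # -> ['star_a_laugh.gif']
```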
Based on the above embodiments, those skilled in the art can see that, because the target picture generated based on the target video is saved into the expression library and the screenshot scenario is thereby connected to the social speech scenario, the user can quickly find the target picture among the alternative expressions in the expression library when posting in social speech; this significantly improves the efficiency of sharing video screenshots in social speech scenarios and improves the user experience.
On the basis of the above technical solutions, the present disclosure also provides a number of refinement schemes as follows in order to achieve additional technical effects.
In an embodiment, the at least two video frames used to generate the target dynamic picture in step S202 may be specified by a screenshot instruction. In implementation, the client first receives a screenshot instruction carrying progress information that indicates the extent of the segment to be saved within the target video, and then generates the target dynamic picture from at least two frames of images in the segment corresponding to that progress information. For example, a shortcut button for capturing the latest 3 seconds may be provided in the client interface; when a trigger operation on this button is received, a screenshot instruction is received indicating that the segment to be saved covers the 3 seconds before the current progress. The video clip corresponding to "the 3 seconds before the current progress" is then taken as the segment to be saved, and the target dynamic picture is generated from at least two frames of images in that clip.
By adopting the embodiment, a person skilled in the art can flexibly set the acquisition mode of the screenshot instruction, so that a user can flexibly select the video clip for screenshot.
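As an illustration, the sketch below translates the assumed "capture the latest 3 seconds" shortcut into the progress information carried by a screenshot instruction; the field names and the use of seconds as the unit are assumptions.

```python
# Minimal sketch: build the progress information for a "latest 3 seconds" shortcut.
from dataclasses import dataclass

@dataclass
class ScreenshotInstruction:
    start_s: float   # start of the segment to be saved, in seconds
    end_s: float     # end of the segment to be saved, in seconds

def quick_capture(current_progress_s, window_s=3.0):
    start = max(0.0, current_progress_s - window_s)
    return ScreenshotInstruction(start_s=start, end_s=current_progress_s)

print(quick_capture(12.4))   # ScreenshotInstruction(start_s=9.4, end_s=12.4)
```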
In an embodiment, the progress of the segment to be saved within the target video can be indicated indirectly through a progress indication control of the target video. Specifically, the progress indication control is a control on the software interface that indicates the playing progress of the target video; it may be bar-shaped, ring-shaped, column-shaped, and so on, where a bar-shaped progress indication control is also called a "progress bar". The present disclosure does not limit the specific form of the progress indication control. In implementation, in response to a triggering operation on a preset screenshot function control, the client displays a starting point mark and an end point mark in the area corresponding to the progress indication control of the target video; the positions of the starting point mark and the end point mark respectively indicate the starting point and the end point of the segment to be saved in the target video. Then, in response to a position adjustment operation on the starting point mark and/or the end point mark, the client adjusts the position of the mark and generates a screenshot instruction carrying the video progress corresponding to the latest positions of the starting point mark and the end point mark. It can be understood that the triggering operation on the preset screenshot function control may be a tap, a long press, a drag, or a similar operation on a screenshot button on the interface, and that the screenshot instruction may be generated in response to a confirmation operation by the user or in response to a preset no-operation timer; for example, generation of the screenshot instruction may be triggered when the user has not moved the starting point mark and/or the end point mark for 3 seconds. These software implementation details can be designed by a person skilled in the art according to product requirements; the present disclosure limits neither the specific form of the triggering operation on the preset screenshot function control nor the mechanism that triggers generation of the screenshot instruction.
Referring to fig. 3, fig. 3 is an exemplary diagram of an interface for determining the screenshot progress according to an exemplary embodiment. In this example, the client uses a progress bar to indicate the playing progress of the target video. After receiving a triggering operation on the preset screenshot function control, the client adds a starting point mark and an end point mark to the progress bar; the user can move both marks by dragging, and after the user taps the "confirm" control in the figure, a screenshot instruction is generated carrying the progress information corresponding to the positions of the starting point mark and the end point mark on the progress bar.
By applying the embodiment, the progress bar can intuitively indicate the playing progress of the target video, so that the movable starting point mark and the movable end point mark can more intuitively show the progress of the segment to be stored in the whole target video, and the interaction experience of a user is improved.
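A sketch of how the marker positions might be mapped to the progress information of the screenshot instruction; representing each marker as a fraction of the progress bar length is an assumption made only for illustration.

```python
# Minimal sketch: convert start/end marker positions (fractions of the
# progress bar) into the clip boundaries carried by the screenshot instruction.
def markers_to_clip(start_fraction, end_fraction, video_duration_s):
    lo = min(max(start_fraction, 0.0), 1.0)
    hi = min(max(end_fraction, 0.0), 1.0)
    lo, hi = min(lo, hi), max(lo, hi)      # a reversed drag still yields a valid clip
    return (lo * video_duration_s, hi * video_duration_s)

# Markers at 25% and 40% of a 60-second video -> clip from 15.0 s to 24.0 s
print(markers_to_clip(0.25, 0.40, 60.0))
```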
In another embodiment, the starting point and the end point of the segment to be saved in the target video can be determined without relying on a progress bar. In implementation, during playback of the target video the client receives, in sequence, a screenshot starting instruction and a screenshot stopping instruction, which respectively indicate the starting point and the end point of the segment to be saved in the target video; the client then generates a screenshot instruction carrying the video progress corresponding to the screenshot starting instruction and the video progress corresponding to the screenshot stopping instruction. Referring to fig. 4, fig. 4 is another exemplary diagram of an interface for determining the screenshot progress according to an exemplary embodiment. In this example the target video is playing: the user presses the screenshot button in the figure when playback reaches the starting point of the segment to be saved and releases it when playback reaches the end point. The client interprets the press and the release of the screenshot button as the screenshot starting instruction and the screenshot stopping instruction respectively, and then generates a screenshot instruction carrying the corresponding video progress values.
By applying the scheme, the progress of the segment to be stored can be determined more conveniently from the video being played without depending on the progress bar, and particularly, under application scenes such as short videos, the video watching experience of a user can be guaranteed not to be interrupted, and the use experience of the user is improved.
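A sketch of the press-and-hold flow as a small state holder: the press event plays the role of the screenshot starting instruction and the release event the role of the screenshot stopping instruction. The event names and the (start, end) pair in seconds are assumptions.

```python
# Minimal sketch of hold-to-capture: press records the clip start, release
# records the clip end and yields the range carried by the screenshot instruction.
class HoldToCapture:
    def __init__(self):
        self._start_s = None

    def on_button_pressed(self, playback_progress_s):
        self._start_s = playback_progress_s           # screenshot starting instruction

    def on_button_released(self, playback_progress_s):
        if self._start_s is None:
            return None
        clip = (self._start_s, playback_progress_s)   # screenshot stopping instruction
        self._start_s = None
        return clip

recorder = HoldToCapture()
recorder.on_button_pressed(8.0)
print(recorder.on_button_released(11.5))   # (8.0, 11.5)
```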
In an embodiment, the process of generating the target picture based on at least one frame of image of the target video may rely on an intelligent recommendation algorithm. Specifically, the client may determine, in the target video and based on a content recognition algorithm, at least one segment to be saved that contains preset high-value content; generate, for each segment to be saved, an alternative dynamic picture based on at least two frames of images in that segment; and, in response to a selection operation, determine at least one alternative dynamic picture as the target dynamic picture. Referring to fig. 5, fig. 5 is an exemplary diagram of an interface for presenting alternative dynamic pictures according to an exemplary embodiment. In this example, the client presents 4 alternative pictures corresponding to 4 segments to be saved that contain preset high-value content and, on receiving the user's selection operation, determines alternative picture 1 and alternative picture 4 shown in the figure as the target dynamic pictures.
It can be understood that a person skilled in the art may design the content recognition algorithm used to locate the segments containing high-value content; for example, an algorithm based on face recognition may be used to pick out facial expressions, and further enumeration of its implementation details is not required in the present disclosure. Moreover, the client may run the content recognition algorithm locally on the client side, or may send a request to a server so that the server runs the algorithm and returns the recognition result; the present disclosure does not need to limit which party executes the content recognition algorithm.
By applying the scheme, the alternative pictures are automatically generated by the algorithm, so that the user does not need to manually select the video clip of the screenshot, but directly selects the alternative pictures containing high-value content, thereby reducing the redundant operation of the user and improving the efficiency of video screenshot.
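One possible realization of the content recognition step is sketched below; the disclosure does not fix the algorithm, so scoring fixed-length windows by the number of detected faces with OpenCV's bundled face detector is purely an assumed example.

```python
# Assumed sketch: score fixed-length windows by detected faces and keep the
# top-scoring windows as candidate segments to be saved.
import cv2

def find_candidate_segments(video_path, window_s=2.0, top_k=4):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 24.0
    scores = {}                                   # window index -> face count
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, 1.1, 5)
        window = int(frame_idx / (fps * window_s))
        scores[window] = scores.get(window, 0) + len(faces)
        frame_idx += 1
    cap.release()
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [(w * window_s, (w + 1) * window_s) for w in sorted(best)]
```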
In one embodiment, the specific storage location and ownership of the expression library can be set flexibly. For example, the expression library may be a local expression library, or an online expression library stored in the cloud; it may be bound to a user account, or be a sharable expression library not bound to any user account. For instance, if captured expressions are stored locally and accounts are not distinguished, the library is a local expression library that is not bound to a user account, so expressions saved by user A can also be accessed and used by user B on the same device. If captured expressions are stored in the cloud and accounts are distinguished, the library is an online expression library bound to a user account, so user A can access the saved expressions after logging in to that account, regardless of which device is used.
Analyzing the effects of the different kinds of expression library: a local expression library favors fast access and low operation latency; an online expression library ensures that saved expressions are not lost when the user changes devices; an expression library bound to a user account raises the level of personalization and meets individual needs; and an expression library not bound to a user account improves generality, letting good expressions be used by more people. By applying this scheme, the specific storage location and ownership of the expression library can therefore be chosen flexibly, which helps adapt to different product requirements and improves the user experience.
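A small sketch of how the four library variants discussed above could be modeled in a client configuration; the field names and the path layout are assumptions.

```python
# Minimal sketch: local vs. online storage, with or without account binding.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExpressionLibraryConfig:
    online: bool                   # True: stored in the cloud; False: on the device
    bound_account: Optional[str]   # None: shared by every user of the device/app

def storage_target(cfg):
    location = "cloud" if cfg.online else "local"
    owner = cfg.bound_account or "shared"
    return f"{location}/expressions/{owner}"

print(storage_target(ExpressionLibraryConfig(online=True, bound_account="user_a")))
# -> cloud/expressions/user_a
print(storage_target(ExpressionLibraryConfig(online=False, bound_account=None)))
# -> local/expressions/shared
```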
In an embodiment, the target picture may further carry source information; specifically, the source information is used to indicate the source of the corresponding target video. For example, if the target video is an original work of a video author, the author's watermark can be embedded in the correspondingly generated target picture to indicate the author of the target video; as another example, if the target video is an exclusive licensed video of a video platform, the platform's trademark or a similar mark can be embedded in the correspondingly generated target picture.
With this scheme, the need of a video creator or a video platform to protect the video copyright can be met, and the generated target picture is harder to misappropriate without attribution.
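A sketch of stamping source information onto a generated still picture with Pillow; the text, placement, and transparency are assumptions, and an animated picture would need the watermark applied frame by frame.

```python
# Minimal sketch: overlay a semi-transparent source watermark on a picture.
from PIL import Image, ImageDraw

def add_source_watermark(picture_path, source_text, out_path):
    img = Image.open(picture_path).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # rough bottom-right placement using the default font
    draw.text((max(0, img.width - 8 * len(source_text) - 10), img.height - 20),
              source_text, fill=(255, 255, 255, 160))
    Image.alpha_composite(img, overlay).convert("RGB").save(out_path)

# add_source_watermark("frame.png", "@video_author / platform", "frame_marked.png")
```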
In an embodiment, the method may include a screenshot-permission check. Specifically, before the target picture is generated, the client may determine whether the target video is a video for which screenshots are prohibited; if so, it hides the function entry for generating the target picture and/or displays prompt information indicating that screenshots of the target video are prohibited. For example, if a target video is an exclusive licensed video of a video platform and the platform wants the video not to be captured within the first 24 hours after release, so that more people watch it directly, the video can be marked as screenshot-prohibited during those first 24 hours. The client then determines that the target video is screenshot-prohibited and hides the screenshot button, so the user cannot trigger the capture function built into the client; alternatively, the client may pop up a prompt telling the user that the video cannot currently be captured. It is to be understood that the present disclosure limits neither the specific form of the function entry nor the specific form of the prompt information.
With this scheme, the need of a video creator or a video platform to protect the video copyright can be met: the client will not generate a target picture when screenshots of the target video are prohibited.
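A sketch of the permission gate; the metadata fields and the 24-hour embargo rule are assumptions taken from the example above.

```python
# Minimal sketch: hide the capture entry and show a notice for prohibited videos.
import time

def screenshot_allowed(video_meta, now=None):
    now = time.time() if now is None else now
    if video_meta.get("screenshot_prohibited"):
        return False
    embargo_s = video_meta.get("capture_embargo_hours", 0) * 3600
    return now >= video_meta.get("published_at", 0) + embargo_s

def configure_capture_ui(video_meta):
    allowed = screenshot_allowed(video_meta)
    return {
        "show_capture_button": allowed,   # hide the function entry when not allowed
        "notice": None if allowed else "Screenshots of this video are not allowed yet",
    }

meta = {"published_at": time.time() - 3600, "capture_embargo_hours": 24}
print(configure_capture_ui(meta))   # button hidden, notice shown
```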
In an embodiment, the method may further include sharing the target picture. Specifically, after the target picture is generated, a sharing control for sharing the target picture may be displayed; in response to a trigger operation on the sharing control, the target picture is shared to the sharing target indicated by that operation. For example, the target picture may be shared into another social application via a cross-application message, or shared with a contact in the social system of the video playing application itself. The specific form of the sharing control and the selectable sharing targets do not need to be limited here.
By applying the scheme, the user can share the target picture captured from the target video more quickly, and the mobility of the video screenshot in the social relationship network is further improved.
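A sketch of dispatching the share action to different kinds of sharing target; the handler names and target kinds are assumptions.

```python
# Minimal sketch: route the target picture to the sharing target chosen by the user.
def share_to_contact(picture_path, contact):
    print(f"send {picture_path} to in-app contact {contact}")

def share_to_external_app(picture_path, app):
    print(f"hand {picture_path} to {app} via a cross-application message")

SHARE_HANDLERS = {"in_app_contact": share_to_contact,
                  "external_app": share_to_external_app}

def on_share_control_triggered(picture_path, target_kind, target):
    SHARE_HANDLERS[target_kind](picture_path, target)

on_share_control_triggered("target_picture.gif", "external_app", "an instant-messaging app")
```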
In an embodiment, the expression library may further be configured with an expression recommendation function. Specifically, when the client determines that the number of pictures in the expression library is less than a preset threshold, it may add to the library at least one picture which is generated based on some target video and whose heat index exceeds a preset heat threshold, the heat index being positively correlated with the number of times the corresponding picture is saved and/or used by users. Referring to fig. 6, fig. 6 is an exemplary diagram of an interface for presenting recommended pictures in an expression library according to an exemplary embodiment. In this example the preset threshold for the number of pictures is 3; since the current expression library contains only 2 pictures, the client adds three popular target-video screenshot pictures, recommended expression A, recommended expression B and recommended expression C, to the library. Whether a video screenshot picture is "popular" is evaluated through its heat index; for example, the client may collect video screenshot pictures generated and shared by other users or clients, obtain their heat indices, and add those with high heat indices to the expression library used by the current user.
It can be understood that the heat index of a video screenshot picture can be positively correlated with the number of times the picture is saved and/or used by users; in this way, the pictures recommended for addition to the expression library are video screenshot pictures that have been saved and/or used sufficiently often. By applying the scheme, video screenshots that are frequently used or frequently saved can be intelligently recommended to the user as expressions when the user has not yet saved enough video screenshots as expressions, which makes it easier for the user to use video screenshots as expressions, promotes the spread of high-quality video screenshots, and increases the influence of the corresponding target videos.
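A sketch of the back-fill recommendation rule; the specific weighting of the heat index (saves plus uses here) and the thresholds are assumptions, since the disclosure only requires that the heat index be positively correlated with how often a picture is saved and/or used.

```python
# Minimal sketch: back-fill a sparse expression library with hot screenshots.
def heat_index(picture):
    # positively correlated with how often the picture is saved and/or used
    return picture["saves"] + picture["uses"]

def backfill_library(user_library, candidate_pool, min_count=3, min_heat=100):
    if len(user_library) >= min_count:
        return user_library
    hot = [p for p in candidate_pool if heat_index(p) > min_heat]
    hot.sort(key=heat_index, reverse=True)
    return user_library + hot[:min_count - len(user_library)]

pool = [{"file": "A.gif", "saves": 80, "uses": 90},
        {"file": "B.gif", "saves": 30, "uses": 20},
        {"file": "C.gif", "saves": 200, "uses": 150}]
print(backfill_library([{"file": "mine.gif", "saves": 1, "uses": 2}], pool))
```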
The above contents are all embodiments of the present disclosure directed to the picture saving method. The present disclosure also provides an embodiment of a corresponding picture saving apparatus as follows:
referring to fig. 7, fig. 7 is a schematic block diagram illustrating a picture saving device according to an exemplary embodiment, which may be applied to a client, and may include the following modules:
a playing module 701 configured to play the target video;
a generating module 702 configured to generate a target picture based on at least one frame of image of the target video;
an adding module 703 configured to add the target picture to an expression library for providing alternative expressions for social speech.
In an embodiment, the target picture may be a target dynamic picture containing multiple frames of images. Compared with a static picture, the multiple frames in a dynamic picture carry more information and can therefore better present content that changes over time, such as motions and facial expressions; in addition, a dynamic picture is more engaging and eye-catching than a static picture and better meets the user's social needs in social speech.
In an embodiment, the social speech may include a social system implemented by video software playing the target video; for example, the target video may be a short video, a short video application playing the short video may have a social system such as video comments and personal dynamics, and a behavior of a user posting the video comments and the personal dynamics in the short video application may be regarded as the social speaking behavior.
It can be understood that, when the adding module 703 adds the target picture to the expression library, it may display a confirmation control and add the picture only after the user confirms, or it may add the picture silently in the background without displaying any confirmation control. The action of adding to the expression library may also be referred to as saving, collecting, storing, and so on; the present disclosure does not need to be limited to a specific name for that action.
In an embodiment, at least two video frames used for generating the target dynamic picture in the generating module 702 may be specified by a screenshot instruction; in implementation, the generating module 702 may include an instruction receiving sub-module configured to receive a screenshot instruction carrying progress information indicating a progress of a to-be-saved clip in the target video, and an image generating sub-module configured to generate a target dynamic image according to at least two frames of images in the to-be-saved clip corresponding to the progress information. By adopting the scheme, a person skilled in the art can flexibly set the acquisition mode of the screenshot instruction, so that a user can flexibly select the video clip for screenshot conveniently.
In an embodiment, the progress of the segment to be saved in the target video can be indicated indirectly through a progress indication control of the target video. Specifically, the progress indication control is a control on the software interface that indicates the playing progress of the target video; it may be bar-shaped, ring-shaped, column-shaped, and so on, where a bar-shaped progress indication control is also called a "progress bar", and the present disclosure does not limit its specific form. In implementation, the instruction receiving submodule may be further configured to: in response to a triggering operation on a preset screenshot function control, display a starting point mark and an end point mark in the area corresponding to the progress indication control of the target video, the positions of the starting point mark and the end point mark respectively indicating the starting point and the end point of the segment to be saved in the target video; and then, in response to a position adjustment operation on the starting point mark and/or the end point mark, adjust the position of the mark and generate a screenshot instruction carrying the video progress corresponding to the latest positions of the starting point mark and the end point mark.
By applying the embodiment, the progress bar can intuitively indicate the playing progress of the target video, so that the movable starting point mark and the movable end point mark can more intuitively show the progress of the segment to be stored in the whole target video, and the interaction experience of a user is improved.
In another embodiment, the starting point and the end point of the segment to be saved in the target video can be determined without relying on a progress bar. In implementation, the instruction receiving submodule may be further configured to receive, in sequence during playback of the target video, a screenshot starting instruction and a screenshot stopping instruction, which respectively indicate the starting point and the end point of the segment to be saved in the target video, and then generate a screenshot instruction carrying the video progress corresponding to the screenshot starting instruction and the video progress corresponding to the screenshot stopping instruction.
By applying the scheme, the progress of the segment to be stored can be determined more conveniently from the video being played without depending on the progress bar, and particularly, under application scenes such as short videos, the video watching experience of a user can be guaranteed not to be interrupted, and the use experience of the user is improved.
In an embodiment, the process in which the generating module 702 generates the target picture based on at least one frame of image of the target video may rely on an intelligent recommendation algorithm. Specifically, the generating module 702 may include an identifying submodule, a batch generating submodule, and a selecting submodule. The identifying submodule may be configured to determine, in the target video and based on a content recognition algorithm, at least one segment to be saved that contains preset high-value content; the batch generating submodule may be configured to generate, for each segment to be saved, an alternative dynamic picture based on at least two frames of images in that segment; and the selecting submodule may be configured to determine, in response to a selection operation, at least one alternative dynamic picture as the target dynamic picture.
It can be understood that a person skilled in the art may design the content recognition algorithm used to locate the segments containing high-value content; for example, an algorithm based on face recognition may be used to pick out facial expressions, and further enumeration of its implementation details is not required in the present disclosure. Moreover, the client may run the content recognition algorithm locally on the client side, or may send a request to a server so that the server runs the algorithm and returns the recognition result; the present disclosure does not need to limit which party executes the content recognition algorithm.
By applying the scheme, the alternative pictures are automatically generated by the algorithm, so that the user does not need to manually select the video clip of the screenshot, but directly selects the alternative pictures containing high-value content, thereby reducing the redundant operation of the user and improving the efficiency of video screenshot.
In one embodiment, the specific storage location and ownership of the expression library can be set flexibly; for example, the expression library may be a local expression library or an online expression library stored in the cloud, and it may be an expression library bound to a user account or a sharable expression library not bound to any user account. Analyzing the effects of the different kinds of expression library: a local expression library favors fast access and low operation latency; an online expression library ensures that saved expressions are not lost when the user changes devices; an expression library bound to a user account raises the level of personalization and meets individual needs; and an expression library not bound to a user account improves generality, letting good expressions be used by more people.
Therefore, by applying the scheme, the specific storage position and the dependency relationship of the expression library can be flexibly selected, so that the method is beneficial to adapting to different product requirements and improving user experience.
In an embodiment, the target picture may further carry source information; specifically, the source information may be used to indicate source information of a corresponding target video; for example, if the target video is an original video of a certain video author, the watermark of the video author can be carried in the correspondingly generated target picture to indicate the author of the target video; for another example, if the target video is an exclusive contract video of a certain video platform, a trademark of the video platform and the like can be carried in the correspondingly generated target picture.
With this scheme, the need of a video creator or a video platform to protect the video copyright can be met, and the generated target picture is harder to misappropriate without attribution.
In an embodiment, the apparatus may include a determining module for performing the screenshot-permission check. Specifically, the determining module may be configured to determine, before the target picture is generated, whether the target video is a video for which screenshots are prohibited; and, if so, hide the function entry for generating the target picture and/or display prompt information indicating that screenshots of the target video are prohibited. It is to be understood that the present disclosure limits neither the specific form of the function entry nor the specific form of the prompt information.
With this scheme, the need of a video creator or a video platform to protect the video copyright can be met: the client will not generate a target picture when screenshots of the target video are prohibited.
In an embodiment, the apparatus may further include a sharing module, configured to implement a sharing process of the target picture. Specifically, the sharing module may be configured to expose a sharing control for sharing the target picture after the target picture is generated; and responding to the triggering operation of the sharing control, and sharing the target picture to the sharing target indicated by the triggering operation. For example, the target picture may be shared in other social applications in a cross-application message manner, or the target picture may be shared with a contact in a social system of the video playing application itself, and so on. The specific form of the sharing control and the sharing target which can be selected do not need to be limited specifically.
By applying the scheme, the user can share the target picture captured from the target video more quickly, and the mobility of the video screenshot in the social relationship network is further improved.
In an embodiment, the expression library may also be configured with an expression recommendation function. Specifically, the device may further include a recommendation module configured to, when the number of pictures in the expression library is determined to be less than a preset quantity threshold, add to the expression library at least one picture that is generated based on any target video and whose heat index is greater than a preset heat threshold, wherein the heat index is positively correlated with the number of times the corresponding picture is saved and/or used by users.
By applying this scheme, when the user has not yet saved enough video screenshots as expressions, video screenshots that are frequently used or frequently saved can be intelligently recommended to the user as expressions. This not only makes it more convenient for the user to use video screenshots as expressions, but also promotes the spread of high-quality video screenshots and increases the influence of the corresponding target videos.
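A minimal sketch of such a recommendation step follows; the CandidatePicture type, the equal weighting of save and use counts in the heat index, and the use of two separate thresholds are illustrative assumptions rather than requirements of the scheme.

// Hypothetical record of a picture generated from some target video.
data class CandidatePicture(val id: String, val savedCount: Int, val usedCount: Int)

// Heat index positively correlated with how often the picture is saved and used;
// the equal weighting below is an assumption made for the sketch.
fun heatIndex(p: CandidatePicture): Int = p.savedCount + p.usedCount

// Tops up the expression library with popular video screenshots when the user
// has saved fewer pictures than the size threshold.
fun recommendExpressions(
    library: MutableList<CandidatePicture>,
    candidates: List<CandidatePicture>,
    sizeThreshold: Int,
    heatThreshold: Int
) {
    if (library.size >= sizeThreshold) return
    candidates
        .filter { heatIndex(it) > heatThreshold && library.none { p -> p.id == it.id } }
        .sortedByDescending { heatIndex(it) }
        .take(sizeThreshold - library.size)
        .forEach { library.add(it) }
}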
With regard to the apparatus in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments of the corresponding method, and will not be elaborated here.
An embodiment of the present disclosure also provides an electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the picture saving method according to any of the above embodiments.
Embodiments of the present disclosure also provide a computer-readable storage medium, where instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the picture saving method according to any of the above embodiments.
An embodiment of the present disclosure further provides a computer program product including a computer program which, when executed by a processor, implements the picture saving method according to any of the above embodiments.
Fig. 8 is a schematic block diagram illustrating an electronic device in accordance with an embodiment of the present disclosure. Referring to fig. 8, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 818. The electronic device described above may employ a similar hardware architecture.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the picture saving method described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 818. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in the position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 818 is configured to facilitate communication between the electronic device 800 and other devices in a wired or wireless manner. The electronic device 800 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 818 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 818 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an embodiment of the present disclosure, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-mentioned picture saving method.
In an embodiment of the present disclosure, there is also provided a computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, which are executable by the processor 820 of the electronic device 800 to perform the above-described picture saving method. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
It is noted that, in the present disclosure, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The method and apparatus provided by the embodiments of the present disclosure are described in detail above, and the principles and implementations of the present disclosure are explained herein using specific examples; the above description of the embodiments is only intended to help understand the method and core ideas of the present disclosure. Meanwhile, for a person skilled in the art, there may be variations in the specific implementations and the application scope based on the ideas of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (10)

1. A picture saving method, applied to a client, characterized by comprising the following steps:
playing the target video;
generating a target picture based on at least one frame of image of the target video;
and adding the target picture to an expression library for providing alternative expressions for social speech.
2. The method according to claim 1, wherein the target picture is a target dynamic picture comprising a plurality of frames of images.
3. The method of claim 2, wherein generating a target picture based on at least one frame of image of the target video comprises:
receiving a screenshot instruction; the screenshot instruction carries progress information indicating the progress of the clip to be stored in the target video;
and generating a target dynamic picture based on at least two frames of images in the segment to be stored corresponding to the progress information.
4. The method of claim 3, wherein receiving the screenshot command comprises:
responding to a triggering operation of a preset screenshot function control, and displaying a starting point mark and an end point mark in a corresponding area of a progress indication control of the target video; the positions of the starting point mark and the end point mark indicate the starting point and the end point of the segment to be stored in the target video; adjusting the position of the start point marker and/or the end point marker in response to a position adjustment operation on the start point marker and/or the end point marker;
and generating a screenshot instruction, wherein the screenshot instruction carries the video progress corresponding to the latest positions of the starting point mark and the end point mark.
5. The method of claim 3, wherein receiving the screenshot command comprises:
in the playing process of the target video, receiving a screenshot starting instruction and a screenshot stopping instruction in sequence; the screenshot starting instruction and the screenshot stopping instruction are respectively used for indicating a starting point and an end point of the to-be-stored clip in the target video;
and generating a screenshot instruction, wherein the screenshot instruction carries the video progress corresponding to the screenshot starting instruction and the video progress corresponding to the screenshot stopping instruction.
6. The method of claim 2, wherein generating a target picture based on at least one frame of image of the target video comprises:
in the target video, determining at least one segment to be stored containing preset high-value content based on a content identification algorithm;
generating alternative dynamic pictures corresponding to the fragments to be stored respectively based on at least two frames of images in the fragments to be stored;
and in response to the selection operation, determining at least one alternative dynamic picture as the target dynamic picture.
7. A picture saving apparatus, applied to a client, characterized by comprising:
a playing module configured to play the target video;
a generating module configured to generate a target picture based on at least one frame of image of the target video;
an adding module configured to add the target picture to an expression library for providing alternative expressions for social speech.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the picture saving method of any one of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions, when executed by a processor of an electronic device, enable the electronic device to perform the picture saving method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program realizes the picture saving method of any one of claims 1 to 6 when executed by a processor.
CN202110843906.XA 2021-07-26 2021-07-26 Picture saving method and device Pending CN113568551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110843906.XA CN113568551A (en) 2021-07-26 2021-07-26 Picture saving method and device

Publications (1)

Publication Number Publication Date
CN113568551A true CN113568551A (en) 2021-10-29

Family

ID=78167336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110843906.XA Pending CN113568551A (en) 2021-07-26 2021-07-26 Picture saving method and device

Country Status (1)

Country Link
CN (1) CN113568551A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527690B (en) * 2009-04-13 2012-03-21 腾讯科技(北京)有限公司 Method for intercepting dynamic image, system and device thereof
CN106658079A (en) * 2017-01-05 2017-05-10 腾讯科技(深圳)有限公司 Customized expression image generation method and device
CN108055587A (en) * 2017-11-30 2018-05-18 星潮闪耀移动网络科技(中国)有限公司 Sharing method, device, mobile terminal and the storage medium of image file
CN110149549A (en) * 2019-02-26 2019-08-20 腾讯科技(深圳)有限公司 The display methods and device of information
CN110572706A (en) * 2019-09-29 2019-12-13 深圳传音控股股份有限公司 Video screenshot method, terminal and computer-readable storage medium
CN111901695A (en) * 2020-07-09 2020-11-06 腾讯科技(深圳)有限公司 Video content interception method, device and equipment and computer storage medium
CN111954087A (en) * 2020-08-20 2020-11-17 腾讯科技(深圳)有限公司 Method and device for intercepting images in video, storage medium and electronic equipment
CN112437353A (en) * 2020-12-15 2021-03-02 维沃移动通信有限公司 Video processing method, video processing apparatus, electronic device, and readable storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024037491A1 (en) * 2022-08-15 2024-02-22 北京字跳网络技术有限公司 Media content processing method and apparatus, device, and storage medium

Similar Documents

Publication Publication Date Title
CN106028166B (en) Live broadcast room switching method and device in live broadcast process
RU2640632C2 (en) Method and device for delivery of information
CN112272302A (en) Multimedia resource display method, device, system and storage medium
CN104113785A (en) Information acquisition method and device
CN111343476A (en) Video sharing method and device, electronic equipment and storage medium
CN109521918B (en) Information sharing method and device, electronic equipment and storage medium
CN107463643B (en) Barrage data display method and device and storage medium
CN113065008A (en) Information recommendation method and device, electronic equipment and storage medium
CN107423386B (en) Method and device for generating electronic card
CN113259226B (en) Information synchronization method and device, electronic equipment and storage medium
CN113573092B (en) Live broadcast data processing method and device, electronic equipment and storage medium
CN114025181B (en) Information display method and device, electronic equipment and storage medium
CN112511779B (en) Video data processing method and device, computer storage medium and electronic equipment
CN106331328B (en) Information prompting method and device
CN112464031A (en) Interaction method, interaction device, electronic equipment and storage medium
CN114025180A (en) Game operation synchronization system, method, device, equipment and storage medium
CN105872573A (en) Video playing method and apparatus
CN107729098B (en) User interface display method and device
CN113901241B (en) Page display method and device, electronic equipment and storage medium
CN108989191B (en) Method for withdrawing picture file, control method and device thereof, and mobile terminal
CN114554231A (en) Information display method and device, electronic equipment and storage medium
US11600300B2 (en) Method and device for generating dynamic image
CN111526380B (en) Video processing method, video processing device, server, electronic equipment and storage medium
CN113568551A (en) Picture saving method and device
CN112040257A (en) Method and device for pushing information in live broadcast room

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination