CN111935505B - Video cover generation method, device, equipment and storage medium - Google Patents
- Publication number
- CN111935505B (application number CN202010746773.XA)
- Authority
- CN
- China
- Prior art keywords
- video
- special effect
- frame
- target
- cover
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N21/234: Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/47205: End-user interface for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
- H04N21/8146: Monomedia components thereof involving graphical data, e.g. 3D object, 2D graphics
- H04N21/816: Monomedia components thereof involving special video data, e.g. 3D video
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Studio Circuits (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The application provides a video cover generation method, device, equipment and storage medium in the field of image processing. Under a video content editing interface, an original video is previewed and a target video frame is selected from the original video according to a selection operation performed by the user during the preview; a target special effect material is selected for the target video frame from pre-downloaded video special effect materials and synthesized into a special effect image; the special effect image and the target video frame are then synthesized to obtain a video cover. With this technical scheme there is no need to wait until the whole original video has been made into a special effect video before selecting one or more frames as the cover, so the video cover generation process is simpler and faster and the time a user spends making a video cover is shortened.
Description
Technical Field
The present application relates to the field of video processing, and in particular, to a method, an apparatus, a device, and a storage medium for generating a video cover.
Background
On a video interaction platform, especially in small (short) video applications, a user can compose a video from one or more selected images, or add special effects such as stickers, text and filters to an original video to enhance its expressiveness. To increase the number of views of a video, the user selects one or more video frames from the video as a cover to attract viewers.
In the conventional cover generation technology, the user generally synthesizes special effect materials into the original video to obtain a special effect video, and then selects one or more special effect video frames from the synthesized special effect video as the video cover.
Disclosure of Invention
The object of the present application is to solve at least one of the above technical drawbacks, in particular the long time consumed by cover production.
In a first aspect, an embodiment of the present application provides a method for generating a video cover, including the following steps:
under a video content editing interface, previewing an original video and selecting a target video frame from the original video according to a selection operation of a user during the preview;
selecting target special effect materials for the target video frame from the pre-downloaded video special effect materials, and synthesizing the target special effect materials to obtain a special effect image;
and synthesizing the special effect image and the target video frame to obtain a video cover.
In one embodiment, the step of previewing the original video and selecting the target video frame from the original video according to the selection operation of the user in the previewing process comprises the following steps:
playing the original video in a preview mode under a video content editing interface;
selecting a first frame image from the original video according to a selection operation of the user, acquiring a plurality of second frame images at a preset time interval starting from the first frame image, and gathering the first frame image and the second frame images to obtain the target video frames.
In an embodiment, the step of selecting a target special effect material for the target video frame from pre-downloaded video special effect materials, and synthesizing the target special effect material to obtain a special effect image includes:
adding a transparent layer on the target video frame;
acquiring video special effect materials downloaded in advance, acquiring target special effect materials selected by a user for the target video frame, and respectively placing the target special effect materials at the designated positions of the transparent layer;
and synthesizing the transparent layer and the target special effect material to obtain a special effect image.
In an embodiment, after the step of synthesizing the transparent layer and the target special effect material to obtain a special effect image, the method further includes:
adding a feature identifier for the special effect image, establishing a corresponding relation between the feature identifier and the target video frame, and storing the special effect image in a specified directory;
the step of synthesizing the special effect image and the target video frame to obtain a video cover comprises the following steps:
searching a special effect image corresponding to the target video frame from the specified directory according to the corresponding relation;
and synthesizing the special effect image and the target video frame corresponding to the special effect image into a special effect cover.
In one embodiment, the target video frame is multiple, and the special effect image is multiple;
the synthesizing the special effect image and the target video frame to obtain the video cover comprises:
and respectively synthesizing each special effect image and the corresponding target video frame to obtain a special effect cover, and sequentially inputting each special effect cover into the dynamic image generator to generate a dynamic special effect cover.
In an embodiment, before selecting the target video frame, the method further includes:
and downloading the required video special effect materials from the special effect material library in advance after entering a video content editing interface.
In one embodiment, the video effect material includes one or more of a filter, a sticker, a text style, or a beauty effect.
In a second aspect, an embodiment of the present application further provides a video cover generating apparatus, including:
the video frame selection module is used for previewing an original video and selecting a target video frame from the original video in the previewing process under a video content editing interface;
the special effect image synthesis module is used for selecting target special effect materials for the target video frames from the pre-downloaded video special effect materials and synthesizing the target special effect materials to obtain a special effect image;
and the video cover synthesizing module is used for synthesizing the special effect image and the target video frame to obtain a video cover.
In a third aspect, an embodiment of the present application further provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the steps of the video cover generation method according to the first aspect.
In a fourth aspect, embodiments of the present application further provide a storage medium containing computer-executable instructions for performing the steps of the video cover generation method according to the first aspect when executed by a computer processor.
According to the video cover generation method, device, equipment and storage medium, the relevant video special effect materials are downloaded in advance after the video content editing interface is entered, so no downloading is needed when the cover is made; a target video frame is selected while the original video is previewed, and the target video frame is synthesized with the corresponding special effect image to obtain the video cover. There is no need to wait until the whole original video has been made into a special effect video before selecting one or more frames as the cover, so the video cover generation process is simpler and faster and the time a user spends making a video cover is shortened.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of a cover selection entry interface provided in one embodiment;
FIG. 2 is a schematic diagram of a cover selection and editing interface provided by an embodiment;
FIG. 3 is a flow diagram of a method for video cover generation according to an embodiment;
FIG. 4 is a diagram of an application scenario of a video cover generation method according to an embodiment;
FIG. 5 is a flow diagram of a method for generating a still video cover according to an embodiment;
FIG. 6 is a flowchart of a dynamic special effect cover generation method according to an embodiment;
fig. 7 is a schematic structural diagram of a video cover generation apparatus according to an embodiment.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
To better explain the technical solution of the present disclosure, a description is first given of a flow of video cover generation in the related art.
In related video production, users often need to set a video cover to attract other users to browse. Taking small video or short video production as an example, a user selects original images or an original video through a video production client, such as an electronic photo album application or a short video production application. During production, the user synthesizes the original images or original video into a final video that is used directly for publishing; this video may or may not contain special effects. After the video has been synthesized, one or more frames are selected from the synthesized video to serve as the cover.
Fig. 1 is a schematic diagram of a cover selection entry interface provided in an embodiment. As shown in fig. 1, after a small video has been synthesized, the client transitions to a video publishing interface, i.e. the cover selection entry interface, on which a "cover selection" entry guides the user to select a frame from the synthesized small video as the cover. The user clicks the "cover selection" entry to enter the cover selection and editing interface shown in fig. 2. Fig. 2 is a schematic diagram of the cover selection and editing interface provided in an embodiment: one or more frames are selected from the synthesized small video, the selected images can be edited, for example by adding text and special effects, and the newly synthesized images are saved to serve as the cover.
However, video synthesis is a time-consuming process. If a frame can only be selected and edited as the cover after the video synthesis has finished, the time needed to generate the video cover obviously increases, which degrades the user experience.
Based on this, the embodiments of the disclosure provide a video cover generation scheme in which the user can edit an image as the cover while previewing the video, without having to select the cover after video synthesis, thereby improving the user experience.
Fig. 3 is a flowchart of a video cover generation method according to an embodiment; the method is applicable to a video cover generation device, such as a client. Fig. 4 is an application scene diagram of the video cover generation method according to an embodiment. Referring to fig. 4, the client 10 may be a portable device such as a smartphone, smart camera, palmtop computer, tablet computer, e-book reader or notebook computer with photographing and image processing functions, so that it can generate a video cover. Optionally, the client 10 has a touch screen, and the user performs the corresponding operations on the touch screen of the client 10 to implement functions such as image processing, video synthesis and video cover generation.
Specifically, as shown in fig. 3, the video cover generation method may include the following steps:
s110, previewing an original video and selecting a target video frame from the original video according to the selection operation of a user in the previewing process under the video content editing interface.
The user opens the video content editing interface and clicks the preview button to preview the original video; the client detects the preview instruction triggered by the user and plays the original video in the preview mode.
in a preview state, a video content editing interface displays a playing progress bar (or a video track) of an original video, the playing progress bar presents a video frame picture of the currently played original video, and a timestamp scale on the playing progress bar indicates the position of the currently played video frame. The user may select one or more frames of video images from the original video as target video frames during the preview process.
In an embodiment, if the user selects a single frame as the target video frame, a static cover is made from that frame; any frame acquired from the original video can serve as the target video frame. Optionally, when the user selects a certain frame of the original video, a screenshot of that frame is saved as a bitmap image; for example, when the user selects the frame, the source data of that frame is found at the data source address according to the frame's timestamp in the original video, and the target video frame is generated from the source data.
In an embodiment, if the user selects a plurality of frames as target video frames and a dynamic cover is to be made from them, the steps of previewing the original video and selecting the target video frames during the preview include:
s1101, playing the original video in a preview mode under a video content editing interface.
Under a video content editing interface, when an original video is played in a preview mode, a progress bar of the original video is displayed on the video content editing interface, the progress bar presents a video frame picture of the currently played original video, and a timestamp scale on the progress bar indicates the position of the currently played video frame.
S1102, selecting a first frame image from the original video according to selection operation of a user, acquiring a plurality of second frame images at preset time intervals according to the first frame image, and collecting the first frame image and the second frame images to obtain a target video frame.
The user performs the selection operation in a preset manner, for example by clicking the currently displayed frame of the original video, or by entering a specific timestamp, in which case the frame corresponding to that timestamp is determined as the selected target video frame.
When the user selects a frame of the original video as the first frame image, a plurality of second frame images are acquired at the preset time interval starting from the first frame image. For example, if the timestamp of the first frame image is 0:10 and the preset time interval is 5 seconds, the video frames whose timestamps are 0:15, 0:20 and so on are acquired as the second frame images.
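For illustration only, a minimal sketch of this interval sampling, assuming a Python client that decodes the original video with OpenCV (the function name, parameters and the use of cv2 are assumptions for illustration, not part of the disclosure):

```python
import cv2

def sample_target_frames(video_path, start_sec, interval_sec, extra_count):
    """Grab the first frame image at start_sec and extra_count further
    second frame images spaced interval_sec apart."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    for i in range(extra_count + 1):
        t_ms = (start_sec + i * interval_sec) * 1000.0
        cap.set(cv2.CAP_PROP_POS_MSEC, t_ms)   # seek to the timestamp
        ok, frame = cap.read()                 # decode the frame at that position
        if not ok:
            break
        frames.append(frame)                   # first + second frame images together
    cap.release()
    return frames
```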
In another embodiment, when the user plays the original video, multiple frames of images are clicked in sequence, multiple frames of images selected by the user are acquired in sequence, and the multiple frames of images are collected as the target video frames.
In one embodiment, after entering the video content editing interface, the required video special effect materials are downloaded from the special effect material library in advance. Optionally, the video special effect materials include one or more of a filter, a sticker, a text style, or a beauty effect. Because downloading video special effect materials is time-consuming, in this embodiment they are downloaded automatically from the special effect material library as soon as the video content editing interface is entered; this avoids having to download them when the video cover is later synthesized and reduces the user's waiting time.
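A hedged sketch of such pre-downloading, assuming the material library exposes plain HTTP URLs (the URLs, cache directory and function name are placeholders, not the actual material library interface):

```python
import os
import requests

def prefetch_materials(material_urls, cache_dir="effect_materials"):
    """Download filters, stickers, text styles and beauty effects ahead of
    time so cover editing does not have to wait for the network."""
    os.makedirs(cache_dir, exist_ok=True)
    for url in material_urls:
        target = os.path.join(cache_dir, os.path.basename(url))
        if os.path.exists(target):
            continue                         # already cached, skip the download
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        with open(target, "wb") as f:
            f.write(resp.content)
```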
S120, selecting target special effect materials for the target video frame from the pre-downloaded video special effect materials, and synthesizing the target special effect materials to obtain a special effect image.
In this embodiment, target special effect materials selected by the user for the target video frame according to the user's preference, such as a filter, a sticker or a beauty effect, are received and added to a default position of the target video frame or to a designated position chosen by the user. In another embodiment, the target special effect material is overlaid at a specified position according to the contour information of the image in the target video frame; for example, if the target video frame shows a girl's head, the material "hat" is overlaid on the head, and a material such as the date "2020.07.14" is added to the lower-right corner of the target video frame. The target special effect materials are then synthesized to obtain the special effect image.
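As an illustration of adding a text material such as a date near the lower-right corner, a minimal Pillow sketch (the default font, margin and function name are assumptions; a real client would use its own text styles):

```python
from PIL import ImageDraw, ImageFont

def add_date_material(image, text="2020.07.14", margin=12):
    """Draw a text material near the lower-right corner of an RGBA image,
    e.g. the target video frame or a transparent layer of the same size."""
    draw = ImageDraw.Draw(image)
    font = ImageFont.load_default()
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    x = image.width - (right - left) - margin
    y = image.height - (bottom - top) - margin
    draw.text((x, y), text, fill=(255, 255, 255, 255), font=font)
    return image
```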
S130, synthesizing the special effect image and the target video frame to obtain a video cover.
The special effect image and the target video frame are synthesized on the video content editing interface to obtain a video cover carrying the special effect. The cover picture is added to the video track of the special effect video and placed before the video content, so that the cover is displayed first when the special effect video is played.
If the target video frame is a single frame, it is synthesized with its corresponding special effect image to obtain one video cover, which is a static video cover. If there are multiple target video frames, each target video frame is synthesized with its corresponding special effect image to obtain multiple static video covers; these covers are then fed to a dynamic image generator to produce a dynamic special effect cover, which improves the expressiveness of the video cover.
According to the video cover generation method provided by this embodiment, the relevant video special effect materials are downloaded in advance after the video content editing interface is entered, so no downloading is needed while the cover is being made; the target video frame is selected while the original video is previewed and synthesized with the corresponding special effect image to obtain the video cover. There is no need to wait until the whole original video has been made into a special effect video before selecting a frame as the cover, so the cover generation process is simpler and faster and the time a user needs to obtain a video cover is shortened.
It should be noted that the technical scheme of the application is applicable not only to synthesizing a special effect video from an original video but also to synthesizing a special effect video from original images. When a special effect video is synthesized from original images under the video content editing interface, a target image is selected as the target video frame while the original images are previewed, a target special effect material is selected for the target video frame from the pre-downloaded video special effect materials and synthesized into a special effect image, and the special effect image and the target video frame are synthesized to obtain the video cover. The cover synthesis can proceed in parallel with the synthesis of the special effect video, so there is no need to wait until the original images have been made into the special effect video before selecting a frame as the cover; the cover generation process is simpler and faster and the time a user needs to obtain a video cover is shortened.
In order to make the technical solution clearer and easier to understand, specific implementation processes and modes of a plurality of steps in the technical solution are described in detail below.
In an embodiment, after the transparent layer and the target special effect material have been synthesized into a special effect image, a feature identifier may be added to the special effect image, a correspondence between the feature identifier and the target video frame is established, and the special effect image is stored in a specified directory.
The feature identifier is used to distinguish different special effect images; each special effect image has a unique feature identifier, and the correspondence between a special effect image and a target video frame is established through the feature identifier. For example, if the feature identifier of special effect image 1 is A001, a correspondence between A001 and target video frame 1 is established, and the special effect image is stored in the specified directory. Through this correspondence, the special effect image corresponding to a target video frame can be queried.
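A possible sketch of this bookkeeping, assuming the special effect image is a Pillow image; the uuid-based identifier, the directory name and the in-memory correspondence dict are illustrative assumptions rather than the claimed implementation:

```python
import os
import uuid

def store_effect_image(effect_image, frame_id, correspondence, out_dir="effect_cache"):
    """Save an effect image under a unique feature identifier in the
    specified directory and record which target video frame it belongs to."""
    os.makedirs(out_dir, exist_ok=True)
    feature_id = uuid.uuid4().hex                         # unique feature identifier
    effect_image.save(os.path.join(out_dir, feature_id + ".png"))
    correspondence[frame_id] = feature_id                 # target frame -> identifier
    return feature_id
```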
Further, the step S130 of synthesizing the special effect image and the target video frame to obtain a video cover may include the following steps:
and S1301, searching a special effect image corresponding to the target video frame from the specified directory according to the corresponding relation.
The feature identifier corresponding to the target video frame is determined according to the correspondence between the feature identifier and the target video frame, the corresponding special effect image is determined by the feature identifier, the special effect image corresponding to the feature identifier is looked up in the specified directory where special effect images are stored, and the special effect image is read.
S1302, combining the special effect image and the target video frame corresponding to the special effect image into a special effect cover.
After the special effect image has been read, it is rendered on the upper layer of the target video frame, and the special effect image and the target video frame are synthesized to obtain the special effect cover.
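Continuing the same sketch, the lookup and synthesis described in S1301 and S1302 could look like this (Pillow-based; all names are assumptions carried over from the previous example):

```python
import os
from PIL import Image

def synthesize_cover(target_frame, frame_id, correspondence, out_dir="effect_cache"):
    """Find the effect image for a target video frame through the stored
    correspondence and render it on the upper layer of the frame."""
    feature_id = correspondence[frame_id]
    effect = Image.open(os.path.join(out_dir, feature_id + ".png")).convert("RGBA")
    cover = target_frame.convert("RGBA")
    cover.alpha_composite(effect)      # effect image drawn over the frame
    return cover.convert("RGB")        # final special effect cover
```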
In an embodiment, the step S120 of selecting a target special effect material for the target video frame from the video special effect materials downloaded in advance, and synthesizing the target special effect material into a special effect image may include the following steps:
and S1201, adding a transparent layer on the target video frame.
In this embodiment, a transparent layer is added to the target video frame, so that the image information on the target video frame can be presented through the transparent layer. Optionally, the size of the transparent layer is the same as the size of the target video frame, or the size of the transparent layer is slightly larger than the size of the target video frame, so that the transparent layer can completely cover the target video frame.
S1202, the pre-downloaded video special effect materials are obtained, the target special effect materials selected by the user for the target video frame are obtained, and the target special effect materials are respectively placed at the designated positions of the transparent layer.
The pre-downloaded video special effect materials are acquired and displayed to the user in a pop-up window. The user clicks the pattern of a video special effect material to select it as a target special effect material. Optionally, each time the pattern of a material is clicked, that material is presented at the designated position of the target video frame; optionally, the user may click the patterns of several different materials in succession, and after a confirmation instruction is received, all selected materials are presented at their designated positions of the target video frame at once.
In this embodiment, the designated position may be a default position corresponding to the target special effect material (for example, the default position of a text material is the lower-right corner of the target video frame), or a target position determined for the type of the target special effect material according to the contour information of the image in the target video frame (for example, a person's head for the image material "hat").
The target special effect materials are respectively placed at the designated positions of the transparent layer; the designated position is related to the shape and size of the target special effect material and to the image content information of the target video frame, and through the transparent layer it can be determined whether a target special effect material has already been placed at the corresponding position relative to the target video frame.
And S1203, synthesizing the transparent layer and the target special effect material to obtain a special effect image.
The transparent layer and the target special effect materials placed on it are synthesized to obtain a special effect image. The special effect image may be a bitmap image, and the positions of the target special effect materials within the special effect image are fixed.
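A minimal Pillow sketch of steps S1201 to S1203, assuming the materials are RGBA PNG files and the placements are pixel coordinates (the function name, the placement format and the PNG assumption are illustrative, not part of the disclosure):

```python
from PIL import Image

def build_effect_image(frame_size, placements):
    """Compose the selected target special effect materials on a fully
    transparent layer of the same size as the target video frame.
    `placements` is a list of (material_path, (x, y)) tuples."""
    layer = Image.new("RGBA", frame_size, (0, 0, 0, 0))     # transparent layer
    for material_path, position in placements:
        material = Image.open(material_path).convert("RGBA")
        layer.alpha_composite(material, dest=position)      # place at designated spot
    return layer                                             # the special effect image
```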
In one embodiment, the target video frames are multiple, and correspondingly, the special effect images are multiple. In this embodiment, the target video frame corresponds to one or more special effect images, and the target video frame image and the corresponding special effect image are synthesized to obtain a special effect cover.
Further, in the scenario based on a plurality of target video frames, the synthesizing the special effect image and the target video frame in step S130 to obtain a video cover may include:
and S1301, respectively synthesizing each special effect image and the corresponding target video frame to obtain a video cover, and sequentially inputting each special effect cover into a motion picture generator to generate a dynamic special effect cover.
In this embodiment, each special effect image corresponds to a unique target video frame. When each special effect image is synthesized with its corresponding target video frame, a plurality of special effect covers is obtained, each of which is a static video cover. The special effect covers are input into the dynamic image generator in chronological order and synthesized into a dynamic special effect cover. An existing dynamic image generator may be used, for example one based on the GIF (Graphics Interchange Format) technique of generating an animated image from a plurality of images, which is not described again here.
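For example, with Pillow the still special effect covers could be assembled into an animated GIF as follows (a sketch only; the frame duration, file name and the choice of GIF as the output format are assumptions):

```python
def make_dynamic_cover(covers, gif_path="dynamic_cover.gif", frame_ms=500):
    """Encode the still special effect covers, in time order, as an
    animated GIF that serves as the dynamic special effect cover."""
    first, rest = covers[0], covers[1:]
    first.save(
        gif_path,
        save_all=True,           # write all frames, not just the first
        append_images=rest,      # remaining covers follow in sequence
        duration=frame_ms,       # per-frame display time in milliseconds
        loop=0,                  # loop indefinitely
    )
```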
In order to more clearly illustrate the present application, the present technical solution is described with reference to the following fig. 5 and 6:
generation of (a) still video cover
Fig. 5 is a flowchart of a method for generating a still video cover according to an embodiment, and as shown in fig. 5, the method may include the following steps:
s201, entering a small video content editing interface.
The embodiment is applied to small video production, and the static video cover is prepared on the small video content editing interface.
S202, downloading the video special effect material in advance.
After entering a small video content editing interface, relevant video special effect materials such as a filter, a sticker, a beauty special effect and the like are downloaded in advance, so that the problem that the waiting time of a user is increased due to the fact that downloading is carried out only when the relevant materials are used is avoided.
S203, the user previews the original video, and selects a certain frame of video from the original video in advance as a target video frame.
And S204, screen capture and storage of the frame of video to obtain a bitmap image corresponding to the frame of video.
The frame of video saved by the user screenshot is a target video frame which is used for preparing a video cover.
S205, the special effect materials selected by the user aiming at the frame of video are independently placed on a transparent layer and another bitmap image is generated.
In this embodiment, the special effect material is separately placed on the transparent layer to generate a special effect image, and the special effect image is a bitmap image, which is convenient for subsequent synthesis of a video cover.
S206, synthesis of the target video frame with the special effect image is confirmed and performed.
S207, judging whether the synthesis is successful, and if so, executing a step S208; otherwise, return to step S203.
And S208, generating a static video cover and uploading the static video cover.
According to the static video cover generation method of this embodiment, the relevant video special effect materials are downloaded in advance after the small video content editing interface is entered, a target video frame is selected while the original video is previewed, and the target video frame is synthesized with the corresponding special effect image to obtain the static video cover. There is no need to wait until the whole original video has been made into a special effect video before selecting a frame as the cover, so the cover generation process is simpler and faster and the time the user needs to make a video cover is shortened.
(II) Generation of dynamic Special Effect covers
Fig. 6 is a flowchart of a dynamic special effect cover generation method according to an embodiment, and as shown in fig. 6, the method may include the following steps:
and S301, entering a small video content editing interface.
The embodiment is applied to small video production, and a dynamic special effect cover is prepared on a small video content editing interface.
S302, video special effect materials are downloaded in advance.
After entering the small video content editing interface, relevant video special effect materials such as filters, stickers and beauty effects are downloaded in advance, which avoids increasing the user's waiting time by downloading the materials only when they are used.
S303, previewing the original video by the user, and selecting a certain frame of video from the original video in advance as a target video frame.
S304, when the user selects a certain frame of the original video, a plurality of video frames after that frame are captured at the preset time interval, and bitmap images corresponding to these video frames are generated.
The frames of video saved in this way are the target video frames used for making the video cover.
S305, the special effect materials selected by the user for each frame of video are each placed separately on a transparent layer, and a corresponding bitmap image is generated for each frame.
In this embodiment, the special effect material is separately placed on the transparent layer to generate a special effect image, and the special effect image is a bitmap image, which is convenient for subsequent synthesis of a video cover.
S306, placing the special effect images corresponding to the target video frames on the upper layers of the corresponding target video frames for synthesis to obtain a plurality of static video covers.
S307, the static video covers, i.e. each target video frame synthesized with its corresponding special effect image, are synthesized through the motion picture generator.
S308, judging whether the synthesis is successful, and if so, executing the step S309; otherwise, return to step S303.
And S309, generating a dynamic special effect cover, and uploading the dynamic special effect cover.
According to the dynamic special effect cover generation method of this embodiment, the relevant video special effect materials are downloaded in advance after the small video content editing interface is entered, multiple target video frames are selected while the original video is previewed, and each target video frame is synthesized with its corresponding special effect image through the motion picture generator to obtain the dynamic special effect cover. There is no need to wait until the whole original video has been made into a special effect video before selecting a frame as the cover, which would in any case yield only a static cover lacking expressiveness. The cover generation process of this embodiment is quick and convenient, the generated dynamic special effect cover is more expressive and attractive, and the time the user needs to make a video cover is shortened.
The following describes in detail embodiments of the video cover generation apparatus.
Fig. 7 is a schematic structural diagram of a video cover generation apparatus according to an embodiment, where the video cover generation apparatus is executable on a video cover generation device, such as a client.
Specifically, as shown in fig. 7, the video cover generating apparatus 100 includes: a video frame selection module 110, a special effects image composition module 120, and a video cover composition module 130.
The video frame selection module 110 is configured to preview an original video in a video content editing interface and select a target video frame from the original video according to a selection operation of a user in a previewing process;
a special effect image synthesizing module 120, configured to select a target special effect material for the target video frame from pre-downloaded video special effect materials, and synthesize a special effect image from the target special effect material;
and a video cover synthesizing module 130, configured to synthesize the special effect image and the target video frame to obtain a video cover.
The video cover generation apparatus provided by this embodiment downloads the relevant video special effect materials in advance after the video content editing interface is entered, so no downloading is needed when the cover is made; it selects a target video frame while the original video is previewed and synthesizes the target video frame with the corresponding special effect image to obtain the video cover. There is no need to wait until the whole original video has been made into a special effect video before selecting a frame as the cover, so the cover generation process is simpler and faster and the time the user needs to make a video cover is shortened.
In one embodiment, the video frame selection module 110 includes: a video playing unit and an image selecting unit;
the video playing unit is used for playing the original video in a previewing mode under a video content editing interface;
the image selection unit is used for selecting a first frame image from the original video according to selection operation of a user, acquiring a plurality of second frame images at preset time intervals according to the first frame image, and collecting the first frame image and the second frame images to obtain a target video frame.
In one embodiment, the special effect image synthesis module 120 includes: the image layer adding unit, the material placing unit and the image synthesizing unit;
a layer adding unit, configured to add a transparent layer on the target video frame;
the material placing unit is used for acquiring pre-downloaded video special effect materials, acquiring target special effect materials selected by a user for the target video frames, and placing the target special effect materials at the specified positions of the transparent layer respectively;
and the image synthesis unit is used for synthesizing the transparent layer and the target special effect material to obtain a special effect image.
In one embodiment, the video cover generating apparatus 100 further comprises: a relation establishing module, configured to add a feature identifier to the special effect image, establish a correspondence between the feature identifier and the target video frame, and store the special effect image in a specified directory;
the video cover composition module 130 includes: an image searching unit and a cover synthesizing unit;
the image searching unit is used for searching a special effect image corresponding to the target video frame from the specified directory according to the corresponding relation;
and the cover synthesizing unit is used for synthesizing the special effect image and the target video frame corresponding to the special effect image into a special effect cover.
In one embodiment, the target video frame is multiple, and the special effect image is multiple;
and the video cover synthesizing module is used for synthesizing each special effect image and the corresponding target video frame to obtain a special effect cover, and sequentially inputting each special effect cover into the dynamic image generator to generate a dynamic special effect cover.
In one embodiment, the video cover generation apparatus 100 further includes: and the material downloading module is used for downloading the required video special effect material from the special effect material library in advance after entering a video content editing interface.
In one embodiment, the video effect material includes one or more of a filter, a sticker, a text style, or a beauty effect.
The video cover generation apparatus of the embodiments of the present disclosure can execute the video cover generation method provided by the embodiments of the present disclosure, and its implementation principle is similar. The actions executed by the modules of the video cover generation apparatus in the embodiments correspond to the steps of the video cover generation method in the embodiments; for a detailed functional description of the modules, reference may be made to the description of the corresponding video cover generation method above, which is not repeated here.
The embodiment of the present invention further provides a computer device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the program, the video cover generation method in any of the above embodiments is implemented.
When the computer device provided by the above embodiment executes the video cover generation method provided by any of the above embodiments, the computer device has corresponding functions and beneficial effects.
An embodiment of the present invention provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a method for video cover generation, including:
previewing an original video and selecting a target video frame from the original video according to the selection operation of a user in the previewing process under a video content editing interface;
selecting target special effect materials for the target video frame from the pre-downloaded video special effect materials, and synthesizing the target special effect materials to obtain a special effect image;
and synthesizing the special effect image and the target video frame to obtain a video cover.
Of course, the storage medium containing the computer-executable instructions provided by the embodiments of the present invention is not limited to the operation of the video cover generation method described above, and has corresponding functions and advantages.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the video cover generation method according to any embodiment of the present invention.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and not necessarily in sequence; they may be performed alternately with other steps or with at least part of the sub-steps or stages of other steps. The foregoing describes only some embodiments of the present application. It should be noted that, for those skilled in the art, several modifications and refinements can be made without departing from the principle of the present application, and these modifications and refinements should also be regarded as falling within the protection scope of the present application.
Claims (8)
1. A video cover generation method is characterized by comprising the following steps:
playing an original video in a preview mode under a video content editing interface;
in response to a first selection operation for at least one frame of video frames in the original video, determining a video frame corresponding to the first selection operation as a target video frame, including: selecting a first frame image from the original video according to a first selection operation of a user, acquiring a plurality of second frame images at preset time intervals according to the first frame image, and collecting the first frame image and the second frame images to obtain a target video frame; or, gathering the multi-frame images sequentially selected by the user when the original video is played to obtain a target video frame;
responding to a second selection operation triggered based on the target video frame in the video special effect materials downloaded in advance, and synthesizing the target special effect materials corresponding to the second selection operation to obtain a special effect image;
adding a feature identifier for the special effect image, establishing a corresponding relation between the feature identifier and the target video frame, and storing the special effect image in a specified directory;
synthesizing the special effect image and the target video frame to obtain a video cover, comprising:
searching a special effect image corresponding to the target video frame from the specified directory according to the corresponding relation;
and synthesizing the special effect image and the target video frame corresponding to the special effect image into a special effect cover.
2. The method of claim 1, wherein the step of, in response to a second selection operation triggered based on the target video frame among the pre-downloaded video special effect materials, synthesizing the target special effect material corresponding to the second selection operation to obtain a special effect image comprises:
adding a transparent layer on the target video frame;
acquiring video special effect materials downloaded in advance, acquiring target special effect materials selected by a user for the target video frames, and respectively placing the target special effect materials at the specified positions of the transparent layer;
and synthesizing the transparent layer and the target special effect material to obtain a special effect image.
3. The method of claim 1, wherein the target video frame is plural, and the special effect image is plural;
the synthesizing the special effect image and the target video frame to obtain a video cover comprises:
and respectively synthesizing each special effect image and the corresponding target video frame to obtain a special effect cover, and sequentially inputting each special effect cover into a dynamic image generator to generate a dynamic special effect cover.
4. The method of video cover generation as claimed in claim 1, further comprising, prior to selecting the target video frame:
and downloading the required video special effect materials from the special effect material library in advance after entering a video content editing interface.
5. The method of claim 1, wherein the video effect material comprises one or more of a filter, a sticker, a text style, or a beauty effect.
6. A video cover creation device, comprising:
the video frame selection module comprises a video playing unit and an image selection unit; the video playing unit is used for playing an original video in a preview mode under a video content editing interface; the image selection unit is used for responding to a first selection operation aiming at least one frame of video frame in the original video, determining the video frame corresponding to the first selection operation as a target video frame, and comprises: selecting a first frame image from the original video according to a first selection operation of a user, acquiring a plurality of second frame images at preset time intervals according to the first frame image, and collecting the first frame image and the second frame images to obtain a target video frame; or, gathering the multi-frame images sequentially selected by the user when the original video is played to obtain a target video frame;
the special effect image synthesis module is used for responding to a second selection operation triggered based on the target video frame in a pre-downloaded video special effect material, and synthesizing the target special effect material corresponding to the second selection operation to obtain a special effect image; adding a feature identifier for the special effect image, establishing a corresponding relation between the feature identifier and the target video frame, and storing the special effect image in a specified directory;
a video cover synthesizing module, configured to synthesize the special effect image and the target video frame to obtain a video cover, and specifically configured to: searching a special effect image corresponding to the target video frame from the specified directory according to the corresponding relation; and synthesizing the special effect image and the target video frame corresponding to the special effect image into a special effect cover.
7. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor when executing the program implements the steps of the video cover generation method of any of claims 1-5.
8. A storage medium containing computer-executable instructions for performing the steps of the video cover generation method of any of claims 1-5 when executed by a computer processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010746773.XA CN111935505B (en) | 2020-07-29 | 2020-07-29 | Video cover generation method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010746773.XA CN111935505B (en) | 2020-07-29 | 2020-07-29 | Video cover generation method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111935505A CN111935505A (en) | 2020-11-13 |
CN111935505B true CN111935505B (en) | 2023-04-14 |
Family
ID=73315303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010746773.XA Active CN111935505B (en) | 2020-07-29 | 2020-07-29 | Video cover generation method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111935505B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111417015A (en) * | 2020-04-22 | 2020-07-14 | 永城职业学院 | Method for synthesizing computer video |
CN113518233A (en) * | 2021-03-22 | 2021-10-19 | 广州方硅信息技术有限公司 | Cover display method and device, electronic equipment and storage medium |
CN113157973A (en) * | 2021-03-29 | 2021-07-23 | 广州市百果园信息技术有限公司 | Method, device, equipment and medium for generating cover |
CN113542594B (en) * | 2021-06-28 | 2023-11-17 | 惠州Tcl云创科技有限公司 | High-quality image extraction processing method and device based on video and mobile terminal |
CN113784152A (en) * | 2021-07-20 | 2021-12-10 | 阿里巴巴达摩院(杭州)科技有限公司 | Video processing method and storage medium |
CN113794890B (en) * | 2021-07-30 | 2023-10-24 | 北京达佳互联信息技术有限公司 | Data processing method, device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006180231A (en) * | 2004-12-22 | 2006-07-06 | Sony Corp | Image editing device and video editing device |
CN108989609A (en) * | 2018-08-10 | 2018-12-11 | 北京微播视界科技有限公司 | Video cover generation method, device, terminal device and computer storage medium |
CN109040615A (en) * | 2018-08-10 | 2018-12-18 | 北京微播视界科技有限公司 | Special video effect adding method, device, terminal device and computer storage medium |
CN109257645A (en) * | 2018-09-11 | 2019-01-22 | 传线网络科技(上海)有限公司 | Video cover generation method and device |
CN110475150A (en) * | 2019-09-11 | 2019-11-19 | 广州华多网络科技有限公司 | The rendering method and device of virtual present special efficacy, live broadcast system |
CN110572717A (en) * | 2019-09-30 | 2019-12-13 | 北京金山安全软件有限公司 | Video editing method and device |
- 2020-07-29: CN application CN202010746773.XA filed; granted as CN111935505B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN111935505A (en) | 2020-11-13 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |